Initialize a new Expectation Suite by profiling a batch of your data.

This process helps you avoid writing boilerplate when authoring suites: you select the columns and other factors you care about, and a profiler writes candidate expectations for you to adjust.

Expectation Suite Name: data_quality_expectation_demo

import os

# Work from the project's great_expectations directory so the DataContext
# can locate great_expectations.yml.
os.chdir('/home/thulasiram/personal/going_deep_and_wide/GiveDirectly/gx_tutorials/great_expectations')
import datetime

import pandas as pd

import great_expectations as gx
import great_expectations.jupyter_ux
from great_expectations.core.batch import BatchRequest
from great_expectations.checkpoint import SimpleCheckpoint
from great_expectations.exceptions import DataContextError

context = gx.data_context.DataContext()

batch_request = {
    "datasource_name": "data_quality_demo",
    "data_connector_name": "default_inferred_data_connector_name",
    "data_asset_name": "yellow_tripdata_sample_2019-01.csv",
    "limit": 1000,
}

expectation_suite_name = "data_quality_expectation_demo"

validator = context.get_validator(
    batch_request=BatchRequest(**batch_request),
    expectation_suite_name=expectation_suite_name
)
column_names = [f'"{column_name}"' for column_name in validator.columns()]
print(f"Columns: {', '.join(column_names)}.")
validator.head(n_rows=5, fetch_all=False)
2022-12-23T15:35:54+0530 - INFO - Great Expectations logging enabled at 20 level by JupyterUX module.
Columns: "vendor_id", "pickup_datetime", "dropoff_datetime", "passenger_count", "trip_distance", "rate_code_id", "store_and_fwd_flag", "pickup_location_id", "dropoff_location_id", "payment_type", "fare_amount", "extra", "mta_tax", "tip_amount", "tolls_amount", "improvement_surcharge", "total_amount", "congestion_surcharge".
vendor_id pickup_datetime dropoff_datetime passenger_count trip_distance rate_code_id store_and_fwd_flag pickup_location_id dropoff_location_id payment_type fare_amount extra mta_tax tip_amount tolls_amount improvement_surcharge total_amount congestion_surcharge
0 1 2019-01-15 03:36:12 2019-01-15 03:42:19 1 1.0 1 N 230 48 1 6.5 0.5 0.5 1.95 0.0 0.3 9.75 NaN
1 1 2019-01-25 18:20:32 2019-01-25 18:26:55 1 0.8 1 N 112 112 1 6.0 1.0 0.5 1.55 0.0 0.3 9.35 0.0
2 1 2019-01-05 06:47:31 2019-01-05 06:52:19 1 1.1 1 N 107 4 2 6.0 0.0 0.5 0.00 0.0 0.3 6.80 NaN
3 1 2019-01-09 15:08:02 2019-01-09 15:20:17 1 2.5 1 N 143 158 1 11.0 0.0 0.5 3.00 0.0 0.3 14.80 NaN
4 1 2019-01-25 18:49:51 2019-01-25 18:56:44 1 0.8 1 N 246 90 1 6.5 1.0 0.5 1.65 0.0 0.3 9.95 0.0

Select columns

Select the columns on which you would like to set expectations and those which you would like to ignore.

Great Expectations will choose which expectations might make sense for each selected column based on its data type and cardinality.
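
For a concrete sense of what gets generated: for a low-cardinality numeric column such as passenger_count, the assistant typically proposes expectations along these lines. This is an illustrative sketch only; the bounds and value set below are hypothetical, not the assistant's actual output.

# Illustrative sketch of typical candidate expectations for a
# low-cardinality numeric column; bounds and value set are hypothetical.
validator.expect_column_values_to_not_be_null("passenger_count")
validator.expect_column_values_to_be_between(
    "passenger_count", min_value=1, max_value=6
)
validator.expect_column_values_to_be_in_set(
    "passenger_count", value_set=[1, 2, 3, 4, 5, 6]
)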

Because the list below enumerates columns to exclude from profiling, simply comment out the columns that are important and should be included. You can select multiple lines and use a Jupyter keyboard shortcut to toggle commenting on each line: Linux/Windows: Ctrl-/, macOS: Cmd-/.

The assistant supports further directives for finer control beyond column selection (see the documentation for details).

exclude_column_names = [
    "vendor_id",
    "pickup_datetime",
    "dropoff_datetime",
    #"passenger_count",
    "trip_distance",
    "rate_code_id",
    "store_and_fwd_flag",
    "pickup_location_id",
    "dropoff_location_id",
    "payment_type",
    "fare_amount",
    "extra",
    "mta_tax",
    "tip_amount",
    "tolls_amount",
    "improvement_surcharge",
    "total_amount",
    "congestion_surcharge",
]

Run the OnboardingDataAssistant

The suites generated here are not meant to be production suites – they are a starting point to build upon.

This initial step gets you started; to reach a production-grade suite, you will want to review and edit the generated expectations.

This is highly configurable depending on your goals. You can ignore columns, specify the cardinality of categorical columns, configure semantic types for columns, and even adjust thresholds and estimator parameters. See the OnboardingDataAssistant and DataAssistant documentation for the complete set of controls and for guidance on choosing and tuning the assistant's behavior for your goals.
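
As a sketch of that control surface, a more tightly configured run might look like the following. Parameter names and accepted values vary across GX versions, so treat this as illustrative and check your version's DataAssistant documentation before relying on it.

# Illustrative sketch of a configured assistant run; the estimation
# parameter is an assumption here and may differ in your GX version.
result = context.assistants.onboarding.run(
    batch_request=batch_request,
    exclude_column_names=exclude_column_names,
    estimation="flag_outliers",  # assumption: alternative bound-estimation mode
)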

Performance considerations:

- Latency: We optimized for an explicit “question/answer” design, which means we issue lots of queries, so connection latency will impact performance.
- Data Volume: Small samples of data will often give you a great starting point for understanding the dataset. Consider configuring a sampled asset and profiling a small number of batches, as in the sketch below.
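
Since batch requests for this datasource accept a limit key (as used above), a quick first profiling pass can read an even smaller sample; for example:

# A smaller sample for a faster first pass; 'limit' caps the rows read
# from the asset, exactly as in the batch_request defined earlier.
sampled_batch_request = {
    "datasource_name": "data_quality_demo",
    "data_connector_name": "default_inferred_data_connector_name",
    "data_asset_name": "yellow_tripdata_sample_2019-01.csv",
    "limit": 200,
}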

# Run the onboarding assistant on the batch; it profiles the included
# columns and proposes candidate expectations.
result = context.assistants.onboarding.run(
    batch_request=batch_request,
    exclude_column_names=exclude_column_names,
)
# Attach the generated suite to the validator so it can be reviewed and saved.
validator.expectation_suite = result.get_expectation_suite(
    expectation_suite_name=expectation_suite_name
)

Save & review your new Expectation Suite

Let’s save the draft expectation suite as a JSON file in the great_expectations/expectations directory of your project and rebuild the Data Docs site to make it easy to review your new suite.

# Print the full suite (keeping expectations that failed during profiling)
# and save it as JSON under the expectations/ directory.
print(validator.get_expectation_suite(discard_failed_expectations=False))
validator.save_expectation_suite(discard_failed_expectations=False)

# Validate the batch against the new suite with a one-off SimpleCheckpoint,
# then rebuild Data Docs so the results are easy to review.
checkpoint_config = {
    "class_name": "SimpleCheckpoint",
    "validations": [
        {
            "batch_request": batch_request,
            "expectation_suite_name": expectation_suite_name
        }
    ]
}
checkpoint = SimpleCheckpoint(
    f"{validator.active_batch_definition.data_asset_name}_{expectation_suite_name}",
    context,
    **checkpoint_config
)
checkpoint_result = checkpoint.run()

context.build_data_docs()

validation_result_identifier = checkpoint_result.list_validation_result_identifiers()[0]
context.open_data_docs(resource_identifier=validation_result_identifier)

Next steps

After you review this initial Expectation Suite in Data Docs, you should edit the suite to make finer-grained adjustments to the expectations. You can do this by running great_expectations suite edit data_quality_expectation_demo.
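
If you prefer to adjust expectations programmatically rather than through the CLI-generated edit notebook, a minimal sketch looks like the following; the column and bounds are illustrative only.

# Minimal sketch: tighten one generated expectation by hand, then
# re-save the suite. Column name and bounds are illustrative.
validator.expect_column_values_to_be_between(
    "passenger_count", min_value=0, max_value=6
)
validator.save_expectation_suite(discard_failed_expectations=False)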