A common way to filter textual data is to look for a substring. In this how-to article, we will learn how to filter string columns in Pandas and PySpark using a substring.
In Pandas, we can use the contains method, which is available through the str accessor.
df = df[df["Fruit"].str.contains("Apple")]
Letter case matters because “Apple” and “apple” are not the same string. If we are not sure of the letter cases, the safe approach is to convert all the letters to lowercase (or uppercase) before filtering.
df = df[df["Fruit"].str.lower().str.contains("apple")]
PySpark also has a contains method, available on Column objects, which can be used as follows:
from pyspark.sql import functions as F

# Keep rows where the Fruit column contains the substring "Apple"
df = df.filter(F.col("Fruit").contains("Apple"))
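As a self-contained sketch, assuming a local Spark session and the same toy data as in the Pandas example:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical data for illustration
df = spark.createDataFrame(
    [("Apple",), ("Green Apple",), ("Banana",), ("apple pie",)],
    ["Fruit"],
)

# contains is case-sensitive, so "apple pie" is filtered out here
df.filter(F.col("Fruit").contains("Apple")).show()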
Letter case matters in PySpark as well. We can use the lower or upper function to standardize letter cases before searching for a substring.
from pyspark.sql import functions as F

# Lowercase the column first, then search for the lowercase substring
df = df.filter(F.lower(F.col("Fruit")).contains("apple"))
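If you are on Spark 3.3 or newer, Column.ilike provides a case-insensitive LIKE that achieves the same result; the % wildcards match any text around the substring:

# Case-insensitive substring match via SQL ILIKE (Spark 3.3+)
df = df.filter(F.col("Fruit").ilike("%apple%"))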