
How-To

How to Filter a DataFrame by Substring Criteria

Aporia Team Aporia Team 2 min read Sep 06, 2022

One of the most common ways to filter textual data is to look for a substring. In this how-to article, we will learn how to filter string columns in Pandas and PySpark DataFrames by a substring.

How to Filter a DataFrame by Substring Criteria?

Pandas

We can use the contains method, which is available through the str accessor. It returns a boolean Series that can be used to filter the DataFrame.

df = df[df["Fruit"].str.contains("Apple")]

Letter cases matter because “Apple” and “apple” are not the same string. If we are not sure of the letter cases, the safe approach is to convert all the letters to uppercase or lowercase before filtering.

df = df[df["Fruit"].str.lower().str.contains("apple")]
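As an alternative to lowercasing the column first, str.contains also accepts a case argument for case-insensitive matching and an na argument that controls how missing values are treated. Here is a minimal self-contained sketch (the sample Fruit data is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"Fruit": ["Apple pie", "banana", "Green APPLE", None]})

# case=False makes the match case-insensitive;
# na=False treats missing values as non-matches instead of propagating NaN
filtered = df[df["Fruit"].str.contains("apple", case=False, na=False)]
print(filtered)
```

This keeps “Apple pie” and “Green APPLE” while safely skipping the missing value, which would otherwise raise an error when the boolean mask contains NaN.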

PySpark

PySpark also has a contains method that can be used as follows:

from pyspark.sql import functions as F

df = df.filter(F.col("Fruit").contains("Apple"))

String comparison is case-sensitive in PySpark as well. We can use the lower or upper function to standardize letter cases before searching for a substring.

from pyspark.sql import functions as F

df = df.filter(F.lower(F.col("Fruit")).contains("apple"))

This question is also being asked as:

  • How to filter rows containing a string pattern from a Pandas DataFrame?
  • How can I search for a string in a column?

