How to Create a DataFrame by Appending One Row at a Time?


A DataFrame is a two-dimensional data structure that consists of labeled rows and columns. Each row can be considered a data point or observation, and the columns represent the features or attributes of the data points.

We can create a DataFrame by stacking data points (i.e. appending one row at a time). In this short how-to article, we will learn how to perform this task for Pandas and PySpark DataFrames.

Pandas

We can use the loc indexer to append rows to a DataFrame. The number of values we provide must match the number of columns in the DataFrame.

Suppose we have a list of names and a list of scores, and we want to create a DataFrame from the values in these lists. It will be a DataFrame with two columns: Name and Score. The first step is to create an empty DataFrame with these two columns. Then, we append the rows in a for loop.

# import Pandas
import pandas as pd

# Python lists
names = ["John", "Jane", "Abby", "Matt"]
scores = [80, 94, 87, 85]

# Create an empty DataFrame with the target columns
df = pd.DataFrame(columns=["Name", "Score"])

# Append one row at a time using the loc indexer
for i in range(len(names)):
    df.loc[i] = [names[i], scores[i]]

# Columns of an empty DataFrame default to object dtype,
# so cast the Score column to integers afterwards
df["Score"] = df["Score"].astype("int64")

The for loop takes the first items in both lists and appends them as the first row of the DataFrame, then the second items, and so on.
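
To verify the result, we can print the DataFrame. With the example lists above (and the cast to integers in the snippet), the output looks roughly like this; exact formatting can vary slightly between Pandas versions:

print(df)

#    Name  Score
# 0  John     80
# 1  Jane     94
# 2  Abby     87
# 3  Matt     85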

PySpark

We can add new rows to a DataFrame with the union method, which is also used for combining DataFrames with the same schema. Before a row is appended to the DataFrame, it needs to be converted to a DataFrame itself. Therefore, this operation is really a union with a one-row DataFrame.

# import libraries and initialize a spark session
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
spark = SparkSession.builder.getOrCreate()

# Python lists
names = ["John", "Jane", "Abby", "Matt"]
scores = [80, 94, 87, 85]

# Create empty DataFrame
schema = StructType([
    StructField("Names", StringType()),
    StructField("Scores", IntegerType())
])

df = spark.createDataFrame([], schema)

# Append rows
for i in range(len(names)):
    # Wrap each row in a one-row DataFrame that reuses the same
    # schema, so column names and types match for the union
    row_to_append = [(names[i], scores[i])]
    df_to_append = spark.createDataFrame(row_to_append, schema)
    df = df.union(df_to_append)
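
As a quick check, calling show on the final DataFrame should display all four appended rows; the output below assumes the example lists from this article:

df.show()

# +-----+------+
# |Names|Scores|
# +-----+------+
# | John|    80|
# | Jane|    94|
# | Abby|    87|
# | Matt|    85|
# +-----+------+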

This question is also asked as:

  • How to add an extra row to a Pandas DataFrame
  • Add new rows to PySpark DataFrame
