How to Append Rows to Pandas and PySpark DataFrames
A DataFrame is a two-dimensional data structure that consists of labeled rows and columns. Each row can be considered a data point or observation, and the columns represent the features or attributes of the data points.
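To make this structure concrete, here is a minimal sketch in Pandas (the names and values are just placeholders, not part of the examples below):

import pandas as pd

# two rows (observations), two columns (attributes)
df = pd.DataFrame({"Name": ["John", "Jane"], "Score": [80, 94]})
print(df)
#    Name  Score
# 0  John     80
# 1  Jane     94

The labels on the left (0 and 1) form the row index.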
We can create a DataFrame by stacking data points (i.e. appending one row at a time). In this short how-to article, we will learn how to perform this task for Pandas and PySpark DataFrames.
Pandas
In Pandas, we can use the loc method to append rows to a DataFrame: assigning values to a new index label with loc adds a row. The number of values we provide must match the number of columns in the DataFrame.
Suppose we have a list of names and a list of scores, and we want to create a DataFrame from the values in these lists. It will be a DataFrame with two columns, Name and Score. The first step is to create an empty DataFrame with these two columns; then we append the rows in a for loop.
# import Pandas
import pandas as pd

# Python lists
names = ["John", "Jane", "Abby", "Matt"]
scores = [80, 94, 87, 85]

# Create an empty DataFrame with the two columns
df = pd.DataFrame(columns=["Name", "Score"])

# Append one row at a time; assigning to a new label with loc adds a row
for i in range(len(names)):
    df.loc[i] = [names[i], scores[i]]
The for loop takes the first items in both lists and appends them as the first row of the DataFrame, then the second items, and so on.
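As a quick check, printing the result shows the four appended rows. It is also worth knowing that growing a DataFrame row by row is relatively slow in Pandas; when all the values are already in lists, a single constructor call (a sketch below, not part of the original example) builds the same DataFrame at once:

print(df)
#    Name  Score
# 0  John     80
# 1  Jane     94
# 2  Abby     87
# 3  Matt     85

# equivalent one-step construction, usually faster than appending in a loop
df_fast = pd.DataFrame({"Name": names, "Score": scores})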
PySpark
In PySpark, we can add new rows with the union function, which combines DataFrames that share the same schema. Before a row can be appended, it needs to be converted to a DataFrame itself, so this operation is really appending a one-row DataFrame to an existing one.
# import libraries and initialize a Spark session
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.getOrCreate()

# Python lists
names = ["John", "Jane", "Abby", "Matt"]
scores = [80, 94, 87, 85]

# Create an empty DataFrame with an explicit schema
schema = StructType([
    StructField("Names", StringType()),
    StructField("Scores", IntegerType())
])
df = spark.createDataFrame([], schema)

# Append rows by unioning one-row DataFrames with the same schema
for i in range(len(names)):
    row_to_append = [(names[i], scores[i])]
    df_to_append = spark.createDataFrame(row_to_append, schema)
    df = df.union(df_to_append)
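As a sanity check, df.show() should print something like the table below. Note also that each union adds a step to the query plan, so when the values already live in Python lists, creating the DataFrame in a single call (a sketch, reusing the schema defined above) avoids that overhead:

# verify the result
df.show()
# +-----+------+
# |Names|Scores|
# +-----+------+
# | John|    80|
# | Jane|    94|
# | Abby|    87|
# | Matt|    85|
# +-----+------+

# one-step construction from the lists, usually preferable to a union loop
df_fast = spark.createDataFrame(list(zip(names, scores)), schema)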