DataFrames are great for data cleaning, analysis, and visualization. However, they are not suitable for storing or transferring data. Once we are done with our analysis, we need to write the DataFrame to a file.
One of the most commonly used file formats for this purpose is CSV. In this how-to article, we will learn how to write Pandas and PySpark DataFrames to a CSV file.
In Pandas, the to_csv function can be used for this task. We just need to specify the file path.
df.to_csv("project-1/results.csv")
If we use the default parameter values, row names (i.e. the index) are written to the file, so when we read the CSV file back into a DataFrame, the index appears as a new column. We can change this behavior by setting the index parameter to False.
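For example, the following call (reusing the same file path as above) writes the data without the index:
df.to_csv("project-1/results.csv", index=False)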
We also have the option to write only some of the columns, which is useful if the DataFrame contains redundant columns. The list of columns to be written is passed to the columns parameter.
df.to_csv(
    "project-1/results.csv",
    index=False,
    columns=["f1", "f2", "f3"]
)
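Putting the Pandas pieces together, here is a minimal, self-contained sketch; the toy data and the extra column f4 are assumptions for illustration, and the project-1 folder is assumed to already exist:
import pandas as pd

# hypothetical toy data; f4 stands in for a redundant column we skip on write
df = pd.DataFrame({
    "f1": [1, 2], "f2": [3, 4], "f3": [5, 6], "f4": [7, 8]
})

# write without the index and with only the selected columns
df.to_csv("project-1/results.csv", index=False, columns=["f1", "f2", "f3"])

# reading the file back confirms there is no extra index column
print(pd.read_csv("project-1/results.csv"))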
In PySpark, we can use the csv method provided by the DataFrameWriter class, which is accessed through the write attribute of a DataFrame.
df.write.csv("project-1/results")
This line of code creates a folder named results and writes the CSV output into it. PySpark writes one file per partition by default, so the folder may contain multiple CSV files. We can have the data written as a single CSV file by repartitioning the DataFrame into one partition with the repartition method.
df.repartition(1).write.csv("project-1/results")
Unlike Pandas, PySpark does not write the header (i.e. the column names) to the CSV file by default. We can change this behavior by setting the header option to True.
df.repartition(1).write.option("header", True).csv("project-1/results")
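Putting the PySpark pieces together, here is a minimal, self-contained sketch; the SparkSession setup and the toy data are assumptions for illustration:
from pyspark.sql import SparkSession

# hypothetical session and toy data for illustration
spark = SparkSession.builder.appName("write-csv-example").getOrCreate()
df = spark.createDataFrame([(1, 3, 5), (2, 4, 6)], ["f1", "f2", "f3"])

# single output file with a header row, written into the project-1/results folder
df.repartition(1).write.option("header", True).csv("project-1/results")

Note that the write fails if the target folder already exists; chaining .mode("overwrite") before .csv(...) replaces it instead.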