DataFrames are great for data cleaning, analysis, and visualization. However, they are not suitable for storing or transferring data. Once we are done with our analysis, we need to write the DataFrame to a file.
One of the most commonly used file formats for this purpose is CSV. In this how-to article, we will learn how to write Pandas and PySpark DataFrames to a CSV file.
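To make the examples easy to follow, here is a minimal setup sketch. The DataFrame name df and the columns f1 through f4 are placeholders chosen for illustration, not part of the original examples.

import pandas as pd

# Hypothetical sample data; the column names f1-f4 are placeholders.
df = pd.DataFrame({
    "f1": [1, 2, 3],
    "f2": [4, 5, 6],
    "f3": [7, 8, 9],
    "f4": [10, 11, 12],
})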
In Pandas, the to_csv function can be used for this task. We just need to specify the file path.
df.to_csv("project-1/results.csv")
If we use the default parameter values, row names (i.e. the index) are written to the file, so when we read the CSV file back into a DataFrame, the index shows up as a new column. We can change this behavior by setting the index parameter to False.
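As a quick sketch of that behavior (assuming the df defined above), writing with the defaults and reading the file back produces an extra "Unnamed: 0" column holding the old index; passing index=False avoids this:

# With the defaults, the index is written as an unnamed first column.
df.to_csv("project-1/results.csv")
pd.read_csv("project-1/results.csv")  # includes an "Unnamed: 0" column

# Setting index=False leaves the row index out of the file.
df.to_csv("project-1/results.csv", index=False)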
We also have the option to write only some of the columns, which is useful if there are some redundant columns in the DataFrame. The list of columns to be written is given to the columns parameter.
df.to_csv( "project-1/results.csv", index=False, columns=["f1","f2", "f3"] )
In PySpark, we can use the csv method provided by the DataFrameWriter class.
df.write.csv("project-1/results")
This line of code creates a directory named results and writes the output as one or more part files inside it, because PySpark writes files in partitions by default. We can have the data written to a single CSV file by using the repartition method.
df.repartition(1).write.csv("project-1/results")
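A note on the design choice here: repartition(1) performs a full shuffle to merge the data into one partition. When we only want to collapse the output into a single file, coalesce is often a lighter alternative since it avoids the shuffle. A sketch, adding mode("overwrite") so the example runs even if the directory already exists:

# coalesce avoids a full shuffle when only reducing the partition count.
df.coalesce(1).write.mode("overwrite").csv("project-1/results")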
Unlike Pandas, PySpark does not write the header (i.e. the column names) to the CSV file by default. We can change this behavior by setting the header option to True.
df.repartition(1).write.option("header", True).csv("project-1/results")
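To verify the output, we can read the directory back. A quick sketch, using the same header option on the reader side so the first row is treated as column names:

# Read the output directory back into a Spark DataFrame.
result = spark.read.option("header", True).csv("project-1/results")
result.show()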