DataFrames are great for data cleaning, analysis, and visualization. However, a DataFrame lives in memory, so to store the results or share them with others, we need to write it to a file.
One of the most commonly used file formats for this purpose is CSV. In this how-to article, we will learn how to write Pandas and PySpark DataFrames to a CSV file.
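The examples below assume a small DataFrame. Here is a minimal sketch of one; the column names (f1, f2, f3) and the values are made up purely for illustration:

```python
import pandas as pd

# An illustrative DataFrame; the column names and values
# are hypothetical, chosen only to make the examples concrete.
df = pd.DataFrame({
    "f1": [1, 2, 3],
    "f2": [10.5, 20.1, 30.7],
    "f3": ["a", "b", "c"],
})
```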
The to_csv function can be used for this task. We just need to specify the file path.
df.to_csv("project-1/results.csv")
If we use the default parameter values, the row labels (i.e. the index) are written to the file, so when we read the CSV file back into a DataFrame, the index appears as a new column. We can change this behavior by setting the index parameter to False.
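As a quick sanity check, writing with the defaults and then reading the file back shows the extra index column; with index=False, the round trip preserves the original columns. A minimal sketch using a temporary file:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"f1": [1, 2], "f2": [3, 4]})
path = os.path.join(tempfile.mkdtemp(), "results.csv")

# Default: the index is written as an unnamed first column,
# which read_csv then surfaces as "Unnamed: 0".
df.to_csv(path)
print(pd.read_csv(path).columns.tolist())  # ['Unnamed: 0', 'f1', 'f2']

# With index=False, only the original columns come back.
df.to_csv(path, index=False)
print(pd.read_csv(path).columns.tolist())  # ['f1', 'f2']
```

Alternatively, if the index was written, passing index_col=0 to read_csv restores it as the index instead of a data column.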
We also have the option to write only some of the columns, which is useful if there are some redundant columns in the DataFrame. The list of columns to be written is given to the columns parameter.
df.to_csv("project-1/results.csv", index=False, columns=["f1", "f2", "f3"])
In PySpark, we can use the csv method of the DataFrameWriter class, which is accessed through the write attribute of a DataFrame.
df.write.csv("project-1/results")
This line of code creates a folder named results and writes the output inside it. PySpark writes data in partitions by default, so the folder may contain several CSV part files. We can have the data written as a single CSV file by using the repartition method.
df.repartition(1).write.csv("project-1/results")
Unlike Pandas, PySpark does not write the header (i.e. the column names) to the CSV file by default. We can change this behavior by setting the header option to True.
df.repartition(1).write.option("header", True).csv("project-1/results")