Read and Write Files From Amazon S3 Bucket With PySpark
To interact with an Amazon S3 bucket from Spark, you must use a compatible version of Spark, such as Spark 3.1 AWS.
This version already ships with the .jar files required to connect to S3-compatible object storage.
- Use Spark 3.1 AWS.
- Create your Spark session with the following lines of code:
  ```python
  from pyspark.sql import SparkSession

  spark = SparkSession.builder \
      .appName("My Application") \
      .config("spark.hadoop.fs.s3a.endpoint", "my-s3.endpoint") \
      .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem") \
      .config("fs.s3a.aws.credentials.provider", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain") \
      .config("spark.hadoop.fs.s3a.access.key", s3_access_key) \
      .config("spark.hadoop.fs.s3a.secret.key", s3_secret_key) \
      .getOrCreate()
  ```
- Do not specify the endpoint configuration if your S3 bucket is hosted on AWS. This parameter is useful when your S3 bucket is hosted by another provider, such as OVH. In that case, you will need to specify the full hostname, for example https://s3.gra.perf.cloud.ovh.net.
- We recommend storing your S3 bucket credentials, namely access_key and secret_key, in environment variables (see the sketch after this list).
- Whenever you interact (read or write) with your Amazon S3 bucket, you must use the S3A protocol, as configured in the Spark session above.
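For example, here is a minimal sketch of loading those credentials from environment variables before building the session; the variable names S3_ACCESS_KEY and S3_SECRET_KEY are placeholders, so use whichever names you actually export:

```python
import os

# Hypothetical environment variable names; export them in your shell or job
# environment and pass the resulting values to the SparkSession builder shown above.
s3_access_key = os.environ["S3_ACCESS_KEY"]
s3_secret_key = os.environ["S3_SECRET_KEY"]
```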
You can now read and write files from your Amazon S3 bucket by running the following lines of code:
```python
# Read a Parquet file from the bucket into a DataFrame.
df = spark.read.parquet("s3a://path/to/my/file.parquet")

# Write the DataFrame back to the bucket as Parquet.
df.write.parquet("s3a://path/to/my/file.parquet")
```
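As a quick sanity check, you can write a small DataFrame and read it back; the bucket name and path below are placeholders:

```python
from pyspark.sql import Row

# Placeholder location; replace the bucket and path with your own.
path = "s3a://my-bucket/examples/people.parquet"

df = spark.createDataFrame([Row(id=1, name="alice"), Row(id=2, name="bob")])
df.write.mode("overwrite").parquet(path)

spark.read.parquet(path).show()
```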
Performance tuning

Cloud object stores are not real filesystems, which has consequences for performance. The Spark documentation is clear on this point. Also, make sure to read its recommendations on the best configuration for your cloud provider.
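As an illustration, the Spark cloud integration documentation recommends using an S3A committer when writing to object stores. The settings below are a sketch only; they assume your Spark distribution ships the hadoop-cloud integration module, so check the documentation for your Spark and Hadoop versions before relying on them:

```python
from pyspark.sql import SparkSession

# Sketch of committer settings taken from the Spark cloud integration docs; these
# assume the spark-hadoop-cloud module is on the classpath and may differ across versions.
spark = (
    SparkSession.builder
    .appName("My Application")
    .config("spark.hadoop.fs.s3a.committer.name", "directory")
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
    .getOrCreate()
)
```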