[wip] Add scripts for running benchmarks on EC2 #1654
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@              Coverage Diff               @@
##               main     #1654       +/-   ##
==============================================
+ Coverage      56.12%    58.65%    +2.52%
- Complexity       976      1142      +166
==============================================
  Files            119       129       +10
  Lines          11743     12640      +897
  Branches        2251      2363      +112
==============================================
+ Hits            6591      7414      +823
- Misses          4012      4049       +37
- Partials        1140      1177       +37
The current status is that Comet is slower than Spark. Both Spark and Comet also take an extremely long time to complete the benchmark on EC2 (~50 minutes, compared to 10 minutes for Spark and 5 minutes for Comet when running on my local workstation).
That is very odd. We don't see the same in an EKS cluster with S3 storage. Is this consistently bad? Not a noisy neighbor issue, I hope?
# Install Rust
sudo yum groupinstall -y "Development Tools"
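The hunk above only shows the build-tools prerequisite; the Rust toolchain itself is typically installed with rustup. A hypothetical sketch of that step (not necessarily the exact contents of setup.sh in this PR):

# Install the Rust toolchain non-interactively via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
. "$HOME/.cargo/env"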
My RHEL9 EC2 instance did not have wget. Maybe we should add yum install -y wget?
Added
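For reference, a minimal sketch of what the added check might look like in setup.sh (the actual change in the PR may differ):

# Install wget if it is not already available (RHEL/Amazon Linux)
if ! command -v wget >/dev/null 2>&1; then
  sudo yum install -y wget
fi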
dev/benchmarks/setup.sh (Outdated)
wget https://dlcdn.apache.org/spark/spark-3.5.5/spark-3.5.5-bin-hadoop3.tgz
tar xzf spark-3.5.5-bin-hadoop3.tgz
cp spark-env.sh spark-3.5.5-bin-hadoop3/conf
export SPARK_HOME=/home/ec2-user/spark-3.5.5-bin-hadoop3/
this line has a trailing /
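Presumably the fix is simply to drop the trailing slash, e.g. (a sketch, not necessarily the final line in the script):

export SPARK_HOME=/home/ec2-user/spark-3.5.5-bin-hadoop3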
dev/benchmarks/setup.sh (Outdated)
git clone https://github.com/apache/datafusion-benchmarks.git

# Install Spark
mkdir /home/ec2-user/tmp
Is the intention here to download and run Spark from /home/ec2-user/tmp? If so, we need to change the lines below to work from that directory. I would also download only if Spark does not already exist. How about this?
# Create tmp directory and install Spark if not already present
mkdir -p /home/ec2-user/tmp
cd /home/ec2-user/tmp
if [ ! -d "spark-3.5.5-bin-hadoop3" ]; then
  wget https://dlcdn.apache.org/spark/spark-3.5.5/spark-3.5.5-bin-hadoop3.tgz
  tar xzf spark-3.5.5-bin-hadoop3.tgz
fi
No, the intent was to install Spark in the default working directory (/home/ec2-user/). We need to create the /home/ec2-user/tmp folder so that we can use tmp storage on the primary EBS volume rather than /tmp, which is mounted on a small volume.
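For context, a scratch directory like this is usually wired in through spark-env.sh. A hedged sketch using the standard Spark environment variable (the actual spark-env.sh in this PR is not shown here, so treat this as an assumption):

# Hypothetical spark-env.sh entry: put Spark's scratch/shuffle space on the
# large EBS-backed home volume instead of the small /tmp mount
export SPARK_LOCAL_DIRS=/home/ec2-user/tmp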
I implemented your feedback to only download if it is not already downloaded
Which issue does this PR close?
Part of #1636
Rationale for this change
Make it easier for anyone to run the 100 GB benchmark on EC2 with local disk.
What changes are included in this PR?
Scripts and documentation.
How are these changes tested?
Manually.