#1
Sep 2003
3×863 Posts
How to search for Mersenne primes using the Amazon EC2 cloud platform (with Elastic File System)
Introduction

The GIMPS project (at http://mersenne.org ) aims to find very large prime numbers; the primes it finds are usually record-breaking. It searches for primes of a particular kind known as Mersenne primes. A program called Prime95 runs on Windows computers and performs a so-called Lucas-Lehmer test ("LL test") on candidates known as Mersenne numbers to see whether they are Mersenne primes. The overwhelming majority are not; the odds of any one candidate being prime are maybe one in several hundred thousand. The Linux version of Prime95 is called mprime; it is the same program, but with a command-line interface rather than a GUI. The Prime95/mprime program can be downloaded from http://www.mersenne.org/download/

Some people run Prime95/mprime in the background on their desktop computer or laptop. It does its calculations and reports the results to a server on the Internet known as PrimeNet. However, it uses the computer's processing power rather fully: your electricity bill will go up, your computer's fan might run faster and louder, and the extra heat generated might warm up your room a bit or make your air conditioner work a bit harder.

An alternative to running Prime95/mprime on your own computer is to run it on a server. Some companies operate large arrays of server computers (aka "the cloud"), and some run a "public cloud" business that allows anyone to log in remotely over the Internet and use their cloud for various purposes (for a fee, of course). Examples of such cloud platforms include Amazon EC2, Google Compute Engine (GCE) and Microsoft Azure. This thread explains how to use Amazon EC2 to do Lucas-Lehmer testing in the cloud. There is another thread that explains how to use Google Compute Engine for the same purpose.

Costs

Using Amazon EC2 at the cheapest rates, you should be able to do about seven double-check tests a month (of Mersenne exponents in the 37M range) for a cost of roughly $15 a month.
This is an estimate: the actual costs are determined by market forces and can fluctuate. You can easily scale up to whatever your budget allows (fourteen double-check exponents a month for roughly $30, and so forth). First-time LL tests (of exponents in the 67M range) will cost about four times as much per exponent. You can do LL testing more cheaply than this if you are willing to order parts and wiring and assemble a barebones do-it-yourself compute farm in your basement. But if you want to experiment with the cloud, read on.

PrimeNet account

You can optionally create a PrimeNet account at http://www.mersenne.org/gettingstarted/ You can contribute to the GIMPS project anonymously if you prefer, but having a PrimeNet account lets you accumulate "GigaHertz-days" credit and see where you rank on a leaderboard compared to other project participants; it also makes it more convenient to keep track of your results and your pending work.

Next section: Using Amazon EC2 for Lucas-Lehmer testing

Last fiddled with by GP2 on 2017-06-07 at 21:30
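As a rough sketch of where the $15 figure could come from (the hourly price and hours-per-test below are illustrative assumptions, not live numbers; actual spot prices fluctuate):

```shell
# Back-of-the-envelope monthly cost, in integer cents to keep it simple.
# Assumed figures: ~2 cents/hour for a c4.large spot instance, and
# ~100 hours of compute per double-check test in the 37M range.
price_cents_per_hour=2
hours_per_test=100
tests_per_month=7

cost_cents=$((price_cents_per_hour * hours_per_test * tests_per_month))
echo "Roughly \$$((cost_cents / 100)) per month"   # prints "Roughly $14 per month"
```

Scaling to fourteen tests a month simply doubles the figure, which matches the estimates above.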
#2
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
#3
Regions and Availability zones
Amazon EC2 offers servers located in various regions ("AWS regions") around the world, and within each region there are several "availability zones". Before you can actually get started with Lucas-Lehmer testing, a number of preliminary configuration steps are necessary, and they need to be performed separately for each AWS region that you intend to use. Detailed instructions are given in the later sections of this guide.

The methods described here require a feature called Elastic File System (EFS). As of August 2017, EFS is only available in certain AWS regions so far: us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-central-1 (Frankfurt), eu-west-1 (Ireland), ap-southeast-2 (Sydney, Australia). See the AWS documentation for an up-to-date list of AWS regions with EFS.

It makes no difference what part of the world you yourself live in: you can use servers in nearly all AWS regions around the world without restriction (the exceptions are the regions in China, and a region reserved for US government customers).

Within each region, there are different "availability zones", which have a letter designation. For instance, the us-east-2 region has availability zones named us-east-2a, us-east-2b and us-east-2c. Note: these letter designations are not the same for every user. What you see as "-2a" might be labeled "-2b" for another Amazon EC2 customer. It is thought that Amazon scrambles these names in order to prevent everyone from just choosing zone "a" by default, which would overcrowd some availability zones while others remained underutilized.

Prices

For the type of work we will be doing, we will incur hourly charges and use so-called "spot instances". Like prices on a stock market, the "spot market" prices can fluctuate considerably, and are driven by market forces.
These spot prices will vary (sometimes greatly) by region and by availability zone, and there are often very large discrepancies between different regions (or even different availability zones within the same region) that can persist for many months.

To understand this pricing discrepancy phenomenon (the lack of so-called "arbitrage"), remember that many Amazon cloud customers run websites and other online services on their servers, and need to locate them in specific geographical areas where the majority of their customers live in order to provide good response times (low latency). Also, many countries have privacy regulations that require online service providers to store personal data about their citizens on servers that are geographically located within the country. Many online services also have strict uptime and availability requirements, and it is impractical for them to shut down every now and then to migrate to a cheaper server. However, for the type of work we will be doing, we are free to select any region and availability zone, and to migrate as often as we wish in search of the lowest spot prices.

Spot price information can be found on the Amazon EC2 Spot Instances Pricing page. You can use the drop-down menu to view prices in the different regions: Ohio, N. Virginia, Oregon, Northern California, and the various regions in other countries around the world. Look at the column labeled Linux/UNIX Usage, scroll down to the section titled Compute Optimized - Current Generation, and in particular find the line for "c4.large". It is important to remember that the spot prices you see displayed on that page are valid only for the current moment, and can fluctuate at any time. If you are familiar with Google Compute Engine and its fixed prices for preemptible virtual machines, note that Amazon EC2 spot prices are not fixed but are determined by market forces.
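If you prefer the command line, the AWS CLI can show the same spot-price history. A sketch (assumes the AWS CLI is installed and configured with your credentials; the region and instance type are just examples):

```shell
# Show recent c4.large Linux spot prices per availability zone in us-east-2.
start_time=$(date -u +%Y-%m-%dT%H:%M:%S)

if command -v aws >/dev/null 2>&1; then
    aws ec2 describe-spot-price-history \
        --region us-east-2 \
        --instance-types c4.large \
        --product-descriptions "Linux/UNIX" \
        --start-time "$start_time" \
        --query 'SpotPriceHistory[*].[AvailabilityZone,SpotPrice]' \
        --output table
fi
```

Running this for a few candidate regions is a quick way to compare them before launching anything.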
As of June 2017, the us-east-2 (Ohio) region is consistently less expensive for our purposes than the other AWS regions, often by a wide margin. This situation has persisted for many months, with relatively stable pricing that has remained fairly similar across its various availability zones. It is therefore recommended that you choose this region. However, because spot prices can in principle change very rapidly, and because EFS might become available in new AWS regions in the future, it is a good idea to keep an eye on new developments and price changes.

The old way

(This subsection is aimed at people who already have experience using EC2 for LL testing the "old way", before EFS was available. If you don't understand it, just skip this subsection.)

With EFS, running mprime in Amazon EC2 is much more convenient than before. You can now pretty much "set it and forget it", just as you would with physical computers.

In the old way, or in regions that don't have EFS yet, you would launch a spot instance with two EBS volumes: an 8 GB delete-on-termination root volume with a standard operating system AMI such as Amazon Linux, plus an additional 1 GB do-not-delete-on-termination volume containing the mprime work directory with the executable, the configuration files and the worktodo and save files. When a spot instance terminates, the 1 GB volume is "orphaned". A small amount of manual intervention is then required to recover its data: the orphaned volume has to be attached to some other instance and mounted, and its worktodo and save files copied over. This gets annoying if it has to be done often on a regular basis (for instance, if you set an aggressively low limit price that gets hit frequently).

With EFS, there are no 1 GB volumes anymore. Rather, all the worktodo and save files for all the instances exist together on the EFS filesystem, as sibling subdirectories of one another.
The EFS filesystem is permanent and is unaffected by the termination of any of the instances that use it. When spot instances terminate and new ones launch, it is easy for the new instances to locate and take over the work of the old terminated instances. No manual intervention or recovery is needed.

Prerequisites

Amazon EC2 offers server computers with either Windows or Linux pre-installed. However, the spot prices for Windows versions are typically four or five times higher, so we will run Linux on our Amazon EC2 servers.

Some basic familiarity with the Linux command-line interface (the "shell" known as "bash") will be very helpful. It will also help if you are familiar with using ssh client programs to log into a Linux server remotely. If not, there is plenty of information about these topics on the Internet. Note that your own computer (the one you will use to log into Amazon EC2) does not have to run Linux; it can run Windows or MacOS or anything else.

If you are familiar with Google Compute Engine, note that Amazon EC2 does not offer SSH in a browser window, so you will have to use an ssh client program. If you use Windows 10, the WSL (Windows Subsystem for Linux) has an ssh client program, or you can download free software. One popular program is PuTTY, which can be downloaded at http://www.chiark.greenend.org.uk/~sgtatham/putty/

Your AWS account

You can access your Amazon AWS account (or sign up for one) at https://aws.amazon.com/ . You can log in with the same username and password you use for shopping at Amazon.com. You will need to provide a payment method (e.g. credit card) when you sign up for AWS.

Next section: Configure ssh for the default security group

Last fiddled with by GP2 on 2017-08-05 at 08:37
#4
Configure ssh for the default security group

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In this section, you will set the permissions that will allow ssh logins to your instances.

Go to the EC2 console at http://console.aws.amazon.com/ec2/ , then click on the "Security Groups" link in the left-hand-side menu. Make sure you are in the AWS region you intended to be in, and change it if necessary. The region name is indicated at the top right part of the page. Make sure it is a region where EFS is available.

You will see a table with one or more lines. Click on the line that has "default" under the Group Name column. The check box on the right-hand side will fill up in a blue color.

In the bottom half of the page, make sure the Inbound tab is selected, then click on the Edit button. Select SSH for the "Type" heading, which will automatically change "Port Range" to 22, and select "My IP" for the "Source" heading, then click on the blue Save button.

Code:
Type    Protocol    Port Range    Source
SSH     TCP         22            My IP ___________

NOTE: if your IP address ever changes in the future, you will need to repeat this step. Otherwise your ssh login attempts will time out and fail.

If you want to avoid having to reconfigure this whenever your IP address changes, and if you don't care about security, you can choose "Anywhere" instead of "My IP" for the "Source" heading, but that's not recommended.

When you launch your instances in the future, make sure to launch them under this "default" security group, and not under any "launch wizard" security groups.

Next section: Create a new security group for mounting EFS

Last fiddled with by GP2 on 2017-06-08 at 05:14
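The same rule can be added from the command line. A sketch (assumes a configured AWS CLI; the region is an example, and checkip.amazonaws.com is an AWS service that returns your public IP as plain text):

```shell
# Allow inbound ssh (port 22) from your current public IP only.
my_ip=$(curl -s https://checkip.amazonaws.com || true)
cidr="${my_ip}/32"

if command -v aws >/dev/null 2>&1; then
    aws ec2 authorize-security-group-ingress \
        --region us-east-2 \
        --group-name default \
        --protocol tcp --port 22 \
        --cidr "$cidr"
fi
```

Re-running this after your IP changes is the CLI equivalent of redoing the "My IP" step in the console.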
#5
Create a new security group for mounting EFS

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In this section, you will set the permissions that will allow your instances to access the EFS filesystem.

Presumably you are still in the Security Groups page after the previous step. If not, go to the EC2 console at http://console.aws.amazon.com/ec2/ , then click on the "Security Groups" link in the left-hand-side menu. Make sure you are in the AWS region you intended to be in, and change it if necessary. The region name is indicated at the top right part of the page. Make sure it is a region where EFS is available.

First, make note of the security group ID of the "default" security group. It is of the form sg-xxxxxxxx, where each "x" is a hexadecimal digit. You will need this below.

Click on the blue "Create Security Group" button.

For "Security group name", choose something like efs-mount-target or whatever you like.

For "Description", fill in something like "Security group for EFS mount targets", or whatever you like.

For "VPC", keep it at the default value (this is the VPC you will use when you run all your instances).

For "Security group rules", make sure the "Inbound" tab is selected, then click on the Add Rule button. Select "NFS" for the "Type" heading, which will automatically change "Port Range" to 2049, and select "Custom" for the "Source" heading, then fill in the text input box with the security group ID (of the form sg-xxxxxxxx, where each "x" is a hexadecimal digit) of the "default" security group.

Code:
Type    Protocol    Port Range    Source
NFS     TCP         2049          Custom ___________

Click on the blue "Create" button. This creates the efs-mount-target security group (or whatever you named it). It will also have a security group ID of the form sg-xxxxxxxx, but this will be different from the ID of the "default" security group.

Write down or copy the security group ID sg-xxxxxxxx of this newly created security group. You will need it later.

Next section: Make sure that you have a key pair for ssh logins

Last fiddled with by GP2 on 2016-07-26 at 19:21
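The NFS rule can also be added from the command line. A sketch (assumes a configured AWS CLI; sg-11111111 and sg-22222222 are hypothetical placeholders for the efs-mount-target group and the "default" group respectively):

```shell
# Substitute your own security group IDs for these placeholders.
efs_sg=sg-11111111
default_sg=sg-22222222

if command -v aws >/dev/null 2>&1; then
    # Allow NFS (port 2049) into the EFS mount-target group, but only
    # from instances that belong to the "default" security group.
    aws ec2 authorize-security-group-ingress \
        --region us-east-2 \
        --group-id "$efs_sg" \
        --protocol tcp --port 2049 \
        --source-group "$default_sg"
fi
```

Using `--source-group` rather than a CIDR range is what restricts NFS access to your own instances.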
#6
Make sure that you have a key pair for ssh logins

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In this section, you will verify (or create) the key pair (private key and public key) that you will use when logging into your instances with ssh.

Go to the EC2 console at http://console.aws.amazon.com/ec2/ , then click on the "Key Pairs" link in the left-hand-side menu. Make sure you are in the AWS region you intended to be in, and change it if necessary. The region name is indicated at the top right part of the page. Make sure it is a region where EFS is available.

If there is an existing key pair that you already use when logging into instances using an ssh client program, then all is well and you can skip the rest of this section. Otherwise you will need to create a new key pair.

Click on the blue "Create Key Pair" button. In the popup window, choose a name and fill in the "Key pair name" field. Choose the name carefully, because it can't be changed later. I recommend including the name of the region in the name, for example ssh-us-west-2 or ssh-us-east-1 or ssh-eu-west-1 or whatever.

Code:
Key pair name: ____________

Click on the "Create" button. A file will be automatically downloaded to your computer; its name will be the key pair name you chose in the previous step plus a ".pem" ending. Your ssh client program will need this file to log into instances.

PuTTY program on Windows

( PuTTY can be downloaded at http://www.chiark.greenend.org.uk/~sgtatham/putty/ )

If you are using the popular PuTTY program on Windows, you need to convert the .pem file to a .ppk file. To do this, run the PuTTYgen program, then in the File menu, choose "Load private key". In the file selection box, change the filter at the bottom from "PuTTY Private Key Files (*.ppk)" to "All Files (*.*)", and then select the .pem file that was downloaded in the previous step. Click the Open button.

Next, decide if you want to type a password or passphrase each time you log into an instance, for added security. If so, fill in the "Key passphrase" and "Confirm passphrase" fields (with the same text in each one).

Then, in the File menu, choose "Save private key". In the save file box, set the File name field at the bottom to the same name as the key pair name (or whatever you like, but that's the most logical choice), and the "Save as type" should be "PuTTY Private Key Files (*.ppk)". Click the Save button.

You now have a .ppk file with the same name as the .pem file that was downloaded earlier. PuTTY will need this file to log into instances.

Next section: Make sure your IAM instance role exists and it has the right permissions

Last fiddled with by GP2 on 2016-07-26 at 19:22
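The key pair can also be created from the command line, which saves the private key directly. A sketch (assumes a configured AWS CLI; the key name just follows the region-in-the-name convention suggested above):

```shell
# Create the key pair and save the private key locally.
key_name=ssh-us-east-2

if command -v aws >/dev/null 2>&1; then
    aws ec2 create-key-pair \
        --region us-east-2 \
        --key-name "$key_name" \
        --query 'KeyMaterial' --output text > "${key_name}.pem"
    chmod 400 "${key_name}.pem"    # ssh refuses private keys with loose permissions
fi
```

The resulting .pem file is the same kind of file the console download gives you, and can be converted to .ppk with PuTTYgen in the same way.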
#7
Make sure your IAM instance role exists and it has the right permissions

This part only needs to be done once. You don't need to repeat it for each AWS region.
In this part, you will create a "role" that your instances will be assigned when they launch, which will allow them to run certain commands and access certain resources. In particular, your instances will need to have permission to run the "aws ec2 describe-instances" command.

Go to the IAM Management page at https://console.aws.amazon.com/iam/home , then click on the "Roles" link in the left-hand-side menu.

There might already be an IAM role (IAM instance role) that you are using when you launch your instances. If you want to use that existing role, skip the next few steps. If you are not already using an IAM instance role, create one: click on the blue "Create new role" button.

We are now on the "Select Role Type" page. In the "AWS Service Roles" section, click the Select button for the line that says "Amazon EC2".

We are now on the "Attach Policy" page. Skip this for now, and click on "Next Step".

We are now on the "Set role name and review" page.

Code:
Role name ____________

Code:
Role description ____________

For "Role name", later sections of this guide will assume something like mprime-instance-role, but you can choose whatever you like. For "Role description" you can keep the default "Allows EC2 instances to call AWS services on your behalf.", or choose whatever you like. Click on the blue "Create role" button.

At this point, you have an IAM instance role; we now need to grant it some permissions (or "policies"). In the Roles page at https://console.aws.amazon.com/iam/home?#roles , click on that IAM instance role to select it.

Setting "policies" for the IAM instance role

We are now on a new page. Make sure the Permissions tab is selected. In the Managed Policies section, click on the blue "Attach Policy" button. In the next page, select the checkboxes next to "AmazonS3FullAccess" and "AmazonEC2RoleforSSM", then click on the blue "Attach Policy" button at the bottom of the page.

(The purpose of the above is to give your instances permission to read and write S3 buckets, for instance to easily copy savefiles from one AWS region to another using S3 buckets, and also to be able to use Run Command and Patch Manager with your instances for easier management.)

Go to the Inline Policies section. That section will be blank. (If it is not blank, then click on the blue "Create Role Policy" button, and skip to the next paragraph.) Click on the text that says "Inline Policies". A new line will appear that says "There are no inline policies to show. To create one, click here." Click on the blue "click here" text.

Under Policy Generator, click the "Select" button. Fill in:

Effect: Allow
AWS Service: Amazon EC2
Actions: DescribeInstances
Amazon Resource Name (ARN): *

Click on the "Add Statement" button. Click on the blue "Next Step" button at the bottom of the page. Click on the blue "Apply Policy" button at the bottom of the page.

Next section: Setting up an EFS filesystem: create the filesystem

Last fiddled with by GP2 on 2017-07-30 at 16:22
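The inline policy produced by the Policy Generator steps above boils down to a small JSON document, and it can be attached from the command line instead. A sketch (assumes a configured AWS CLI; the role name mprime-instance-role and the policy name allow-describe-instances are example names):

```shell
# Write the policy document that allows "aws ec2 describe-instances".
cat > describe-instances-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
EOF

if command -v aws >/dev/null 2>&1; then
    # Attach it to the role as an inline policy.
    aws iam put-role-policy \
        --role-name mprime-instance-role \
        --policy-name allow-describe-instances \
        --policy-document file://describe-instances-policy.json
fi
```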
#8
Setting up an EFS filesystem: create the filesystem

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In this section, we will create an EFS filesystem.

Go to the Elastic File System page at https://console.aws.amazon.com/efs/home . The page will warn you if you are in an AWS region where EFS is not yet available. Make sure you are in the AWS region you intended to be in, and change it if necessary. The region name is indicated at the top right part of the page. Make sure it is a region where EFS is available. (As of July 2016, only the N. Virginia (us-east-1), Oregon (us-west-2) and Ireland (eu-west-1) regions had EFS; see the "Regions and Availability zones" section earlier for a more current list.)

Click on the blue "Create file system" button. We are now on a new page.

For "VPC", keep it at the default value (this is the VPC you will use when you run all your instances).

In the "Create mount targets" section, there are two or more rows. Keep all the entries under the "Subnets" column at the "default" values. Keep all the entries under the "IP address" column at "Automatic". Under the "Security group" column, remove the default security group in each row and add the efs-mount-target security group (or whatever you named it), which you created in the "Create a new security group for mounting EFS" section earlier.

Click on the blue "Next Step" button. We are now on a new page.

In the Add Tags section, the first line under the Key heading is "Name". You can fill in the Value field with something like worktodo or whatever you like.

For the Performance Mode section, change this from "General Purpose" to "Max I/O".

Click on the blue "Next Step" button. We are now on a new page. Click on the blue "Create File System" button.

This creates the EFS filesystem. It will have a File System ID of the form fs-xxxxxxxx, where each "x" is a hexadecimal digit. Write down or copy the file system ID of this newly created filesystem. You will need it later.

Next section: Setting up an EFS filesystem: make sure you have an instance running with the right permissions

Last fiddled with by GP2 on 2017-11-04 at 09:43 Reason: Performance Mode should be Max I/O
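The filesystem itself can also be created from the command line. A sketch (assumes a configured AWS CLI; mount targets would still need to be created separately, and the creation token is just an arbitrary idempotency label of your choosing):

```shell
# Create an EFS filesystem in Max I/O mode and print its fs-xxxxxxxx ID.
token=worktodo

if command -v aws >/dev/null 2>&1; then
    aws efs create-file-system \
        --region us-east-2 \
        --creation-token "$token" \
        --performance-mode maxIO \
        --query 'FileSystemId' --output text
fi
```

Re-running the command with the same creation token returns the same filesystem rather than creating a duplicate.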
#9
Setting up an EFS filesystem: make sure you have an instance running with the right permissions

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
After the previous section, your EFS filesystem has been created, but it's empty. You will need to create a directory structure and populate it with the mprime executable and configuration files. To do that, you will need to log into an instance using an ssh client program. This section makes sure a suitable instance already exists, or launches one if necessary.

Before proceeding, make sure you have an ssh client program and know how to use it. A later section will provide some basic information about how to use the popular PuTTY program for Windows.

Go to the EC2 console at http://console.aws.amazon.com/ec2/ , then click on the "Instances" link in the left-hand-side menu. Make sure you are in the same AWS region where you created the EFS filesystem in the previous step, and change it if necessary. The region name is indicated at the top right part of the page.

Do you have any instances already running? If so, click on one of them, and then check the bottom half of the page for the "IAM role". If it has the same value as the IAM instance role mentioned in the "Make sure your IAM instance role exists and it has the right permissions" section above (mprime-instance-role or whatever you named it), then you can use that instance, and skip the rest of this section.

But if you have no instances running, or if those instances have a blank "IAM role" value, you will need to launch a new instance. The easiest thing to do is to launch an on-demand instance of some instance type like t2.nano or t2.micro, which is very inexpensive or even free-tier. To do so, click on the blue "Launch Instance" button.

In the next page, the first line is "Amazon Linux AMI", so just choose that by clicking on the blue "Select" button.

In the next page, choose something like t2.nano or t2.micro, or choose one that says "Free tier eligible". Click on the gray "Next: Configure Instance Details" button.
In the next page, set the following:

Number of instances: keep this at 1
Network: keep the same default VPC id
IAM role: select the IAM instance role that was configured in the "Make sure your IAM instance role exists and it has the right permissions" section above (mprime-instance-role or whatever you named it).

Keep everything else unchanged, then click on the gray "Next: Add Storage" button.

In the next page, do nothing and click on the gray "Next: Tag Instance" button.

In the next page, do nothing and click on the gray "Next: Configure Security Group" button.

In the next page, under "Assign a security group", change the setting to "Select an existing security group". Do not use one of the "launch wizard" security groups. Click on the "default" security group. Click on the blue "Review and Launch" button.

In the next page, click on the blue "Launch" button.

In the next page, select "Choose an existing key pair" and in the "Select a key pair" field, select the key pair name that you chose (or created) in the "Make sure that you have a key pair for ssh logins" section above. Click the "I acknowledge..." check box, and click on the blue "Launch Instances" button.

Wait a minute or two for the instance to finish launching. At this point, you have an instance running which has the required IAM role (IAM instance role) that allows it to access the EFS filesystem you created in a previous section.

Next section: Setting up an EFS filesystem: run the ssh client program

Last fiddled with by GP2 on 2017-07-30 at 16:40
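The same launch can be done from the command line. A sketch (assumes a configured AWS CLI; every value here is a placeholder: use a current Amazon Linux AMI id for your region, and your own key pair and role names):

```shell
# Launch one small on-demand instance with the IAM role and the
# "default" security group, matching the console steps above.
ami_id=ami-12345678        # placeholder: look up a current Amazon Linux AMI
instance_type=t2.micro

if command -v aws >/dev/null 2>&1; then
    aws ec2 run-instances \
        --region us-east-2 \
        --image-id "$ami_id" \
        --instance-type "$instance_type" \
        --count 1 \
        --key-name ssh-us-east-2 \
        --security-groups default \
        --iam-instance-profile Name=mprime-instance-role
fi
```

Note that, as with the console, the IAM role has to be set at launch; it can't be added to a running instance in this console generation.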
#10
Setting up an EFS filesystem: run the ssh client program

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In the previous section, you launched an instance and made sure that it had the correct IAM role (IAM instance role). In this section, you will log into that instance with an ssh client program.

Go to the EC2 console at http://console.aws.amazon.com/ec2/ , then click on the "Instances" link in the left-hand-side menu. Make sure you are in the same AWS region where you created the EFS filesystem in the previous steps, and change it if necessary. The region name is indicated at the top right part of the page.

Click on the instance that was identified or created in the previous section. This will bring up the information for that instance in the bottom half of the page.

In the bottom half of the page, verify once again that the "IAM role" field says mprime-instance-role or whatever you named it in the "Make sure your IAM instance role exists and it has the right permissions" section earlier. If not, start over with some other instance (go back to the previous section).

Also in the bottom half of the page, verify the "Key pair name" field. This is the key pair name that your ssh client program will use; presumably it is the same key pair name from the "Make sure that you have a key pair for ssh logins" section earlier.

Finally, and once again in the bottom half of the page, locate the "Public DNS" field, which will contain an entry similar to "ec2-nnn-nnn-nnn-nnn.REGION-NAME-HERE.compute.amazonaws.com", where each "nnn" part of "nnn-nnn-nnn-nnn" is a number from 0 to 255 (together they represent an IP address), and REGION-NAME-HERE is the name of the AWS region (e.g., us-east-1 for N. Virginia, us-west-2 for Oregon, etc). Make note of this; it is the host name that your ssh client program will use.

Run your ssh client program, providing it with both the "Key pair name" and the "Public DNS" information mentioned above. If you use PuTTY for Windows, some basic information on how to use it is provided at the bottom of this section.
You should now have a terminal window asking you to log in. It should say "login as:"

Note: if you get a "network error" instead, perhaps you just launched the instance a minute ago and it is not yet ready to accept network connections. Wait a minute and try again. If you do not get a "login as:" prompt, and the terminal window simply times out, then perhaps your IP address has changed from what it was when you set it in the "Configure ssh for the default security group" section earlier. If so, go back to that section and redo the "My IP" setting, then try again.

At the "login as:" prompt, enter ec2-user (note you cannot log in as "root"). If you chose not to create a passphrase in a previous section, you will now get a shell prompt for the Linux bash shell. If you did choose to create a passphrase in a previous section, you will now be asked for it. You will then see:

Code:
login as:
Authenticating with public key "imported-openssh-key"
Passphrase for key "imported-openssh-key":

PuTTY

( Note: as an alternative to using PuTTY, you could use the ssh command in Windows Subsystem for Linux. )
( PuTTY can be downloaded at http://www.chiark.greenend.org.uk/~sgtatham/putty/ )

If you are using PuTTY on Windows as your ssh client, start the program. In the dialog box, go to the "Host Name (or IP address)" field and enter the "ec2-" string mentioned above, which comes from the "Public DNS" information for the instance.

Then in the "Category" area on the left part of the dialog box, click on Connection / SSH / Auth (click on the "+" to expand "SSH" if necessary). In the "Private key file for authentication" text input box, click the "Browse..." button and select the key pair .ppk file that was mentioned (or created) in the "Make sure that you have a key pair for ssh logins" section.

Click on the "Open" button, and then in the original dialog box, click on the "Open" button there too.

You will probably get a warning box with a big yellow exclamation mark that says "The server's host key is not cached in the registry." and a bunch of other text. This is normal; click on "Yes". A terminal window will open. For the rest, continue as described above, in the main part of this section.

Next section: Setting up an EFS filesystem: initial setup and configuration

Last fiddled with by GP2 on 2017-07-30 at 16:46
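If you use the ssh command from WSL, macOS or Linux rather than PuTTY, the downloaded .pem file is used directly, with no .ppk conversion needed. A sketch of the invocation (the key file name and host below are hypothetical placeholders; the block just prints the command you would run):

```shell
# Substitute your own .pem file and the "Public DNS" value of your instance.
key_file=ssh-us-east-2.pem
host=ec2-203-0-113-25.us-east-2.compute.amazonaws.com

echo "ssh -i $key_file ec2-user@$host"
# prints: ssh -i ssh-us-east-2.pem ec2-user@ec2-203-0-113-25.us-east-2.compute.amazonaws.com
```

The -i option points ssh at the private key, and ec2-user is the login name, just as with PuTTY.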
#11
Setting up an EFS filesystem: initial setup and configuration

This part will need to be done separately for each AWS region that you use (but for now let's just do one region).
In the previous section, you logged into an instance using your ssh client program. In this section, some familiarity with the "bash" shell of Linux will be helpful.

You will perform the initial setup and configuration of the EFS filesystem you created in the "Setting up an EFS filesystem: create the filesystem" section earlier. In that section, the newly-created filesystem was assigned a File System ID, which you wrote down. The File System ID is of the form fs-xxxxxxxx, where each "x" is a hexadecimal digit.

At the command line prompt, enter a command similar to:

Code:
FILE_SYSTEM_ID=fs-xxxxxxxx    # STOP!! Change the "xxxxxxxx" to the right value

Enter the following commands:

Code:
availability_zone=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
region=$(echo -n ${availability_zone} | sed 's/[a-z]$//')
sudo mkdir /mnt-efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 ${FILE_SYSTEM_ID}.efs.${region}.amazonaws.com:/ /mnt-efs

If the "mount" command fails (times out), one possibility is that you did not start the instance with the correct IAM role (IAM instance role) or security group. In this case, you must go back to the "Setting up an EFS filesystem: make sure you have an instance running with the right permissions" section and launch a new instance; you can't change the IAM role or security group of an already-running instance. Another possibility is that you did not configure the "efs-mount-target" security group with the right permissions to allow NFS access. Go back to the "Create a new security group for mounting EFS" section to do this, then try the "mount" command again.

Enter the commands:

Code:
cd /mnt-efs
sudo mkdir mprime
sudo chown ec2-user:ec2-user mprime
cd mprime

Go to http://www.mersenne.org/download/ to check what is the most recent version of mprime for Linux 64-bit. The following assumes it is p95v294b5 (version 29.4).

Enter the commands: Code:
wget https://www.mersenne.org/ftp_root/gimps/p95v294b5.linux64.tar.gz
mkdir p95v294b5
ln -s p95v294b5 prog
cd prog
tar xvzf ../p95v294b5.linux64.tar.gz
cd ..
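The effect of the mkdir/ln -s steps above can be sketched offline; here a temporary directory stands in for /mnt-efs/mprime and a placeholder file stands in for the unpacked binary (no download is performed):

```shell
# Temp dir stands in for /mnt-efs/mprime; no actual download/unpack here
cd "$(mktemp -d)"
mkdir p95v294b5
ln -s p95v294b5 prog      # "prog" is a version-independent name
touch p95v294b5/mprime    # placeholder for the unpacked mprime binary
readlink prog             # prints: p95v294b5
```

The point of the "prog" symlink (per the post's edit note, which switched from "p95" to "prog") is that when a newer mprime version is released, you can unpack it into its own directory and repoint the symlink (ln -sfn newdir prog) without changing anything else that refers to prog.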
Enter the commands: Code:
mkdir instances
cd instances
mkdir c4.large

However, optionally, if you know that you want to do so, you could also choose to enter the following command: Code:
mkdir c4.xlarge c4.2xlarge c4.4xlarge c4.8xlarge # enter names exactly!

If you don't know how to use editor programs like vi on Linux, the simplest way to create a file is by copy-and-pasting an existing file that you created on your own computer. So to use the prime-init.txt sample version provided below, first edit it on your own computer, then copy the whole thing into your clipboard. Then run the commands: Code:
cd c4.large
cat > prime-init.txt

Paste the clipboard contents into the terminal, then press Ctrl-D on a new line to finish writing the file. Then enter the command Code:
cd ..

Sample prime-init.txt file (it is the same for all the subdirectories c4.large, c4.xlarge, etc.; the V5UserID line is blank, but you can enter a valid user ID as explained below): Code:
V24OptionsConverted=1
WGUID_version=2
StressTester=0
UsePrimenet=1
DialUp=0
V5UserID=
WorkPreference=0
OutputIterations=10000
ResultsFileIterations=999999999
DiskWriteTime=30
NetworkRetryTime=2
NetworkRetryTime2=70
DaysOfWork=3
UnreserveDays=30
DaysBetweenCheckins=1
NumBackupFiles=3
SilentVictory=1
Priority=1
RunOnBattery=1
[PrimeNet]
Debug=0
ProxyHost=
[Worker #1]

You can also change the WorkPreference= line. It can have the following values:

0 — Whatever makes the most sense
2 — Trial factoring
100 — First time primality tests
101 — Double-checking
102 — World record primality tests
4 — P−1 factoring
104 — 100 million digit primality tests
1 — Trial factoring to low limits
5 — ECM on small Mersenne numbers
6 — ECM on Fermat numbers

Here are sample local-init.txt files, one for each subdirectory. You can enter them in the same way as the prime-init.txt file.

Sample local-init.txt file for c4.large subdirectory:

Note: If you are using mprime version 28 or earlier, use "ThreadsPerTest" instead of "CoresPerTest". But it is best to use the latest version. Code:
OldCpuSpeed=2900
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerID=C4_L
Memory=3072 during 7:30-23:30 else 3072
WorkerThreads=1
CoresPerTest=1

If you also optionally chose to mkdir the c4.xlarge, c4.2xlarge, etc. subdirectories in a previous step, then you need to create the following for them:

Sample local-init.txt file for c4.xlarge subdirectory:

Note: if you are using mprime version 28 or earlier, change CoresPerTest to ThreadsPerTest and add the line: AffinityScramble2=0213 Code:
OldCpuSpeed=2900
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerID=C4_XL
Memory=6144 during 7:30-23:30 else 6144
WorkerThreads=1
CoresPerTest=2

Sample local-init.txt file for c4.2xlarge subdirectory:

Note: if you are using mprime version 28 or earlier, change CoresPerTest to ThreadsPerTest and add the line: AffinityScramble2=04152637 Code:
OldCpuSpeed=2900
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerID=C4_2XL
Memory=12288 during 7:30-23:30 else 12288
WorkerThreads=1
CoresPerTest=4

Sample local-init.txt file for c4.4xlarge subdirectory:

Note: if you are using mprime version 28 or earlier, change CoresPerTest to ThreadsPerTest and add the line: AffinityScramble2=08192A3B4C5D6E7F Code:
OldCpuSpeed=2900
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerID=C4_4XL
Memory=26000 during 7:30-23:30 else 26000
WorkerThreads=1
CoresPerTest=8

Sample local-init.txt file for c4.8xlarge subdirectory:

Note: if you are using mprime version 28 or earlier, change CoresPerTest to ThreadsPerTest and add the line: AffinityScramble2=0I1J2K3L4M5N6O7P8Q9RASBTCUDVEWFXGYHZ Code:
OldCpuSpeed=2900
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerID=C4_8XL
Memory=56000 during 7:30-23:30 else 56000
WorkerThreads=2
CoresPerTest=9

Note that the ComputerID= line naming scheme above is just a suggestion. You could use ComputerID=c4.large for instance, to make it literally match the instance type.

Note the Memory= line is mostly irrelevant unless you do P−1 testing. The instance types have 3.75 GiB, 7.5 GiB, 15 GiB, 30 GiB and 60 GiB of memory, for c4.large through c4.8xlarge respectively.

Don't specify your own ComputerGUID value

This section is intended for more experienced users.

If you choose to copy your own existing files rather than use the above, I recommend you delete any ComputerGUID= line. This line will get automatically added when the mprime program starts up. Also omit any HardwareGUID= or FixedHardwareUID=1 lines.

If multiple instances use the same ComputerGUID or HardwareGUID line, then PrimeNet thinks they are one and the same computer. If you look at View the CPUs in your account at mersenne.org, there will be fewer entries there than the actual number of computers you have. However, I think it's OK to have multiple instances (of the same instance type) using the same ComputerID line, and we do so.

Note that the DiskWriteTime is set to the default 30 minutes. If you don't run a lot of instances, you might want to reduce it to 10 minutes. There are some circumstances where save files do not get written when instances are terminated, in particular when you yourself terminate the instance from the EC2 console. A smaller setting like 10 minutes helps to ensure that no more than 10 minutes' work is lost under those circumstances.
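For example, a hypothetical one-liner to lower DiskWriteTime in a copy of prime-init.txt (the stand-in file here has just that one line, rather than the full sample file):

```shell
cd "$(mktemp -d)"
printf 'DiskWriteTime=30\n' > prime-init.txt    # minimal stand-in file
# Reduce the save-file interval from 30 to 10 minutes
sed -i 's/^DiskWriteTime=.*/DiskWriteTime=10/' prime-init.txt
grep '^DiskWriteTime=' prime-init.txt           # prints: DiskWriteTime=10
```

(GNU sed's -i edits the file in place; this is just one way to make the change without opening an editor.)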
However, if you run many dozens of LL testing instances simultaneously, or if you do the kind of work that creates large savefiles (things other than LL testing, such as P−1 testing or Fermat testing or ECM testing, using large B2 values), then you might want to keep the DiskWriteTime higher. This is because the EFS filesystem will throttle I/O if you only use a relatively small amount of disk space. If you see savefile names ending in .write being written very slowly over several minutes, or if simple Linux commands in your SSH terminal take a long time to execute, then your I/O is being throttled, and you should either do less I/O (use longer DiskWriteTime intervals), or increase your EFS filesystem disk space usage, which means incurring higher charges.

Multiple subdirectories of the same instance type

This section is intended for more experienced users. If you are going through this procedure for the first time you should skip it.

The above setup is simple and works for most purposes. Directly under the instances directory, we create one subdirectory named c4.large, and (optionally) others named c4.xlarge, c4.2xlarge, etc., and all the instances running under them ask the PrimeNet server to give them whatever work "makes the most sense" (WorkPreference=0 in the prime-init.txt file). This means faster machines will usually get first-time Lucas-Lehmer tests of larger Mersenne exponents, slower machines may work on double-checking of smaller Mersenne exponents, and older machines may do ECM testing to find factors, etc.

However, experienced users running multiple instances might want to have some of them doing one work type and others doing a different type. If you wish, in addition to having subdirectories with names corresponding exactly to an instance type (for example "c4.large"), you can have subdirectories that add a hyphenated suffix (for example "c4.large-doublechecking", "c4.large-ecm", etc.).
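For instance, the suffixed layout just described could be created like this (the suffix names are the illustrative ones from above; a temp dir stands in for /mnt-efs/instances):

```shell
cd "$(mktemp -d)"   # stand-in for /mnt-efs/instances
mkdir c4.large c4.large-doublechecking c4.large-ecm
ls -d c4.large*     # lists all three directories
```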
Also, these subdirectories don't have to be in the first level directly underneath the "instances" directory (for example "instances/c4.large"); they can be one or more levels further down (for example "instances/doublechecking/c4.large").

Each of the {instance-type} or "{instance-type} + hyphenated suffix" subdirectories should have a "prime-init.txt" and "local-init.txt" file within it. For instance, your "c4.large-doublechecking" subdirectory could have a prime-init.txt file that changes WorkPreference=0 to WorkPreference=101 (for doublechecking), while simultaneously a "c4.large-LL" subdirectory has WorkPreference=100 for first-time Lucas-Lehmer tests.

Typically the prime-init.txt will vary, as described above. Meanwhile the local-init.txt file usually won't vary for instances of the same instance type. It's OK (and even recommended) for instances of the same instance type to have the same ComputerID= line in this file; however, local-init.txt should not have any ComputerGUID=, HardwareGUID= or FixedHardwareUID=1 line at all. A unique ComputerGUID line will get automatically generated by mprime when the script copies and renames local-init.txt to local.txt and then runs mprime.

You could allocate different amounts of work to each work type by creating dummy subdirectories whose names start with i-. For example:

c4.large-doublechecking could be created with two empty dummy subdirectories with names "i-foo1", "i-foo2".

c4.large-LL could be created with four empty dummy subdirectories with names "i-foo1", "i-foo2", "i-foo3", "i-foo4".

The exact names don't matter, but they must start with i-. Then after this setup, you can launch six instances of instance type "c4.large". Each instance will find one dummy subdirectory and rename it to its own instance-id, and then mprime will start up and automatically fetch work from the PrimeNet server. The work type will be as specified in the prime-init.txt file.
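Putting the pieces together, here is a self-contained sketch of the six-instance allocation example above (a temp dir stands in for /mnt-efs/instances, and the one-line prime-init.txt files are stand-ins for the full sample file given earlier):

```shell
cd "$(mktemp -d)"    # stand-in for /mnt-efs/instances

# Two work-type directories for the same instance type
mkdir c4.large-doublechecking c4.large-LL

# Stand-in prime-init.txt files; set the work type for each directory
printf 'WorkPreference=101\n' > c4.large-doublechecking/prime-init.txt
printf 'WorkPreference=100\n' > c4.large-LL/prime-init.txt

# Dummy i- subdirectories: allocate 2 instances to double-checking, 4 to LL
mkdir c4.large-doublechecking/i-foo1 c4.large-doublechecking/i-foo2
mkdir c4.large-LL/i-foo1 c4.large-LL/i-foo2 c4.large-LL/i-foo3 c4.large-LL/i-foo4
```

After this, launching six c4.large instances would (per the mechanism described above) have each one claim a dummy subdirectory and pick up the work type of its parent directory.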
Recall that at startup, any newly-launched instance first tries to locate and take over orphaned subdirectories left behind by spot instances that terminated for whatever reason (usually because spot prices rose above the limit price we set). Those orphaned subdirectories have names corresponding to the instance-ids of the instances that were running in them (these names begin with i- followed by either 8 or 17 hexadecimal digits).

If a suitable orphaned subdirectory is found, the newly-launched instance renames it to its own instance-id; if no orphaned subdirectories are found, then the newly-launched instance simply creates a new subdirectory whose name is its own instance-id. That newly created subdirectory is created as a child of the parent "instance type" directory. If there is only one "instance type" directory (e.g. c4.large), then it will be created there; however if there are several to choose from (e.g., c4.large-doublechecking, c4.large-LL, ecm/c4.large, etc.) then one will be picked, but it might not be the one you want. Creating dummy "i-" subdirectories lets you control which "instance type" parent directory is used for which number of instances, thus allocating amounts of work among different work types.

If you wish, you can even seed the dummy subdirectories with worktodo.txt files containing lines copied and pasted from the http://www.mersenne.org/manual_assignment/ page. The subdirectories only need the worktodo.txt files; all the other files (configuration and executable) will get copied automatically. If the worktodo.txt file is missing, then mprime will request assignments and receive random exponents.

Next section: Setting up an EFS filesystem: terminating the instance you created

Last fiddled with by GP2 on 2017-11-18 at 02:00 Reason: symbolic link "prog" instead of "p95"; increase Memory to less conservative amounts; v 29.4b5; AffinityScramble2
Thread | Thread Starter | Forum | Replies | Last Post |
How-to guide for running LL tests on Google Compute Engine cloud | GP2 | Cloud Computing | 4 | 2020-08-03 11:21 |
Is it possible to disable benchmarking while torture tests are running? | ZFR | Software | 4 | 2018-02-02 20:18 |
Amazon Cloud Outrage | kladner | Science & Technology | 7 | 2017-03-02 14:18 |
running single tests fast | dragonbud20 | Information & Answers | 12 | 2015-09-26 21:40 |
LL tests running at different speeds | GARYP166 | Information & Answers | 11 | 2009-07-13 19:39 |