Volunteer Computing

by Bradley Knockel (last modified July 2024)

Let me tell you about volunteer computing. I had some old (and newer) laptops and mobile devices lying around, and I now use them to cure diseases, find black holes, and solve difficult math problems. But I don't do a thing! I just install Folding@home and/or BOINC, which take care of most everything for me! Your computer has both a CPU and GPU, and this software uses them to do calculations, then it uses an internet connection to send back the results. By default, calculations will only run when plugged in (not on battery power) and will not use mobile data (only WiFi or Ethernet). Processes are run with low priority to cause minimal interference with computer performance, and several settings exist to optionally restrict the software further.

In 2018, the Summit supercomputer (built with RISC CPUs and Nvidia Tesla GPUs) became the world's most powerful supercomputer. Folding@home surpassed the computing power of Summit during the COVID-19 outbreak! BOINC was never as popular. Combining the efforts of many computers is called distributed computing, and, now that vast numbers of devices are sitting around, let's put them to use! Getting access to supercomputers is difficult due to high demand, especially now that CPUs are no longer keeping pace with Moore's law, so let's help meet the demand!

If you have any iOS device (iPhone or iPad), you cannot use Folding@home or BOINC. Same for a Chromebook, though there are rumors of tricky ways to get BOINC to work on a Chromebook (good luck). BOINC runs on Android though! By the way, most tech people prefer Android and real laptops over iOS and Chromebooks.

x86-64 (Intel and AMD) CPUs are the present, but RISC (ARM) CPUs are the future. ARM CPUs currently have some support, but GPUs that come only with ARM systems (such as Qualcomm's Adreno GPUs and the GPUs in Apple's M-series chips) are hardly supported at all. I currently recommend BOINC when using ARM devices, though even BOINC does not yet support Windows on ARM. On this webpage, the only ARM devices I discuss in detail are Android devices and Linux-on-ARM devices (such as the Raspberry Pi).

Some fancy new computers, such as Apple-Silicon Macs and certain 12th-gen-or-newer Intel processors, have both performance and efficiency CPU cores inside the same processor. I believe the following paragraph to be generally true, but I am not certain, so please skip this paragraph if you do not care! The internet is certain that Apple-Silicon Macs will run volunteer computing on only one core type, either the performance cores or the efficiency cores (so you should reduce the number of CPUs that volunteer computing uses accordingly), but the internet cannot seem to agree on which type is used (perhaps it depends on the volunteer computing project), though it seems to be the efficiency cores. Windows 10 and not-yet-updated Linux kernels will not correctly utilize Intel's performance cores; the internet says that Windows 11 should work, though I still cannot consistently get anything beyond the efficiency cores to be used by volunteer computing (it probably depends on the exact work sent by the project), perhaps complicated by the fact that mixing efficiency and performance cores in one multithreaded process is not ideal. However, Intel performance cores can be forced into use by turning off the efficiency cores (which may be called Atom cores) in the BIOS or UEFI. Another very temporary solution is to change each task's priority or affinity as it runs. On Apple-Silicon devices, the efficiency cores are the first ones listed in Activity Monitor's CPU-Usage feature (under View), just like on any other ARM device. However, Intel lists any efficiency cores last (the ordering can be viewed via Windows' Resource Monitor). As always, Intel performance cores (which are hyper-threaded!) are listed in pairs: CPU0 and CPU1 both run on the first physical performance core. Efficiency cores are not hyper-threaded, and neither are any Apple-Silicon cores.

Folding@home

The Folding@home software is very easy to use, it supports a very important cause (curing, preventing, and treating diseases), and good scientific progress is being made. I highly recommend you try it before trying BOINC. Here is some basic info on what it does.

Folding@home is great for any desktop or laptop, though GPUs won't yet run on macOS or ARM. In your computer's power settings, have it never go to sleep when plugged in (depending on your operating system, you might need to keep your laptop lid open when it charges). By default, Folding@home will not run on battery power, which is great.

I found the documentation to be lacking and rarely updated. I have also found the software itself to be somewhat lacking. Below are some technical notes I have found to be quite useful.

Costs of volunteer computing

By far, the things that get damaged the most on a computer are the mechanical parts (keyboards, fans, hard disks, etc.) or are the results of mechanical damage (dropping, spilling, etc.). Batteries can only do so many charge cycles, but, by default, volunteer computing does not use the battery. SSDs (unlike HDDs) are not mechanical, but they too can only endure so many writes; luckily, nearly all (all?) current projects write to the storage drive at a low rate. The electrical logic circuits are typically just fine when a computer reaches its end of life, especially if you use the computer safely, so why not put them to use?

At first, I was of the mindset that, since computers use a good amount of electrical power while idling, the most efficient choice was to run the computer as hard as possible without reaching high temperatures or noticeably affecting usability. I then started noticing that all of my devices got more work done per core when I ran fewer CPU cores. I believe this to be caused by resource sharing between the CPU cores...
• All cores have to communicate with the same RAM (and sometimes cores will share an L2 cache).
• For low-end AMD CPUs, a single FPU might be shared between cores (most CPUs have an FPU for each core).
• For many x86-64 CPUs, hyper-threading causes two threads to fight for resources in a single physical core.
Depending on the work, sharing can be great because not all threads need to be using the exact same resource at the same time. But some work will want to use the RAM or FPU as much as possible. In addition, more power per calculation is needed...
• Running extra cores can cause fans to turn on that are now drawing extra power.
• A processor at a higher temperature draws extra power due to more leakage current.
• The computer trying to coordinate all the sharing also uses more power.
I now try to err on the side of conserving computational resources, especially when I consider how some BOINC applications always want the same work to be completed by at least two people. Why would I risk harm to my bank account to get a 10% increase in total work output when the code and project administration may have any number of inefficiencies that could be increasing run time by 100% or even 1000%! For BOINC, running fewer tasks at a time means finishing individual tasks faster, using less RAM, and losing less progress when BOINC is restarted.

Certainly never over-volt your processor! Higher voltages drastically increase power usage. I also recommend turning off any CPU turbo mode (usually done in BIOS or UEFI) because increasing the frequency requires more voltage. A CPU's frequency typically increases as demand for work increases. However, when more cores are used, laptops with a lot of cores will often decrease the frequency due to electrical or thermal limitations, but I would still always turn off any turbo mode. Other than turning off any turbo mode, I try not to worry about the details and just let the computer make its own decisions.

We need to experiment on each computer for different types of work while monitoring CPU usage and temperature. For Folding@home, do this by measuring the time between checkpoints for a given work unit to see if it scales as expected. For BOINC, I run a bunch of tasks, then average together ones from the same application. Note that, for BOINC, "run time" will not be much larger than "CPU time" when resource sharing is occurring (the two only differ by a lot when you are running more threads than logical cores). With resource sharing, "run time" and "CPU time" both increase, though I have noticed that sometimes "run time" increases a bit more than "CPU time".

The main idea of volunteer computing is to put your current devices to use, but, if you really like volunteer computing and want to buy a machine for it, get computationally powerful, energy-efficient Nvidia GPUs! As ARM becomes more popular, we can hopefully soon get an ARM workstation (Windows or Linux) with a ton of ARM cores or a cluster of single-board computers that can run Linux! Actually, Nvidia Jetson can already do this! Nvidia on ARM will hopefully become more popular and be supported by Folding@home and BOINC projects soon. However, if you buy a device for volunteer computing, keep in mind that projects don't owe you anything. They never asked you to spend your own money and can shut down whenever they want. Also, keep in mind that using your current devices is, ignoring the electricity cost, free. Finally, keep in mind that your expensive machine, while perhaps being more energy efficient, might be much cheaper in a few years. Yes, computation done today is worth more than computation done tomorrow, but I think the real solution is getting more people into volunteer computing.

To get new people interested in volunteer computing, focus on people who (1) have or want to develop problem-solving skills and attention to detail, (2) have the time and willingness to volunteer, and (3) have computational resources or might get them soon. I used to focus only on people with computational resources and get frustrated, but I now understand that much more is needed from a person! Some gamers have a computer that is so powerful that it overheats half of their home regardless of the month, though my answer to that is to use BOINC's daily schedules!

As for when to recycle old devices, I would say to retire them when they stop bringing you joy. I was given several half-broken old computers because people knew that I did volunteer computing. It was fun at first to have lots of computers running, but the computers were ancient (they had spinning hard drives, only 4 GB of RAM, and no OpenCL support, preventing their GPUs from being used), and their battery and keyboard problems made them frustrating to maintain. I recycled them, and my electricity usage went down a lot at little cost to the number of tasks I was completing. My 2012 i5 laptop will be the next to be retired. Compared to a 2021 i5 laptop that my work lets me use, it uses 67% more electricity to do 13% of the GPU output and the same amount of total CPU output. I use a Kill-A-Watt to measure electrical power. For now, I am turning it off over the summers.

Besides electricity, I suppose the other main cost is time. I recommend checking up on devices daily to make sure they are running (I quickly do it via bookmarked BOINC-project webpages). I personally have greatly enjoyed the time I have spent learning about computers, but then there is the time spent being annoyed. For example, on some newer laptops, I initially thought that overheating GPUs were causing the computer to go to sleep (wrong, because overheating would cause the computer to shut down rather than sleep). I eventually figured out that the laptops go to sleep if the power goes out for a few seconds while the monitor is off and the screen is locked (even though Windows 10 or 11 is set to never go to sleep on battery), so I went into the BIOS and set it to Block Sleep (this is dangerous for your battery if you ever unplug your laptop!), but I cannot find any information anywhere about why I must do this or why only certain laptops do this. How annoying! Certain projects having incompetent administrators is annoying. Overconfident know-it-alls on the forums are annoying. Employers blocking volunteer computing is annoying. Microsoft Windows not allowing me to do what I want can be annoying. Everything worthwhile in life has a cost, and I enjoy science and messing around with computers, so I am happy being a bit annoyed!

BOINC projects

Folding@home will not run on my Intel GPUs (though it once ran, or maybe still runs, on some Intel GPUs). GPUs are very fast, so I very much want to use them! Folding@home can utilize GPUs that are vastly more powerful than CPUs, so I would prefer my CPU power go to projects that are incompatible with GPUs. Note that CPU-only tasks still exist for GPU-compatible projects mainly because the projects want to satisfy people's desire to contribute, though Folding@home does at least assign the smallest proteins to the CPU-only tasks (if Folding@home were cool, instead of doing CPU-only, they would allow Intel GPUs to be used and give them the work that would have gone to CPUs!). BOINC allows us to run Intel GPUs and various CPU-only projects! After using Folding@home for a couple of years, I have now completely switched to BOINC because I don't have the fancy GPUs that Folding@home wants. If I were to ever get fancy GPUs, I would install Folding@home with my internet off so that I could disable the CPU slot before it got any work, and then I would let BOINC run my CPUs.

Also, Folding@home currently only runs on desktops and laptops, so my Android phone cannot be used! Since its ARM CPU is extremely efficient, I very much want to use it! BOINC can use ARM CPUs!

Folding@home is designed to get the most from the fastest GPUs. BOINC, on the other hand, has a million different projects, options, and possibilities. BOINC is the Wild West of volunteer computing! There are many BOINC projects because BOINC is software that anyone can use to create a project. Due to the freedom of BOINC, I always do research to make sure that a project is not a waste of time, and I never do ones with "Independent" as their listed sponsor at the above link. Also, unlike Folding@home, some BOINC projects or sub-projects can require a lot of RAM or RAM speed.

The projects listed as independent at the above link do not have a university or corporate sponsor. While it may seem that supporting non-sponsored projects is "helping the underdog", corporations and universities have standards for both the research and the experts employed to do the research. This sponsorship mostly guarantees that your time is not being wasted on a project that has either or both of (1) bad code or analysis or (2) code that could be sped up over 50 times (the now-discontinued Collatz project for example). Also, sponsored projects have a means of using or publishing their results, whereas the results of independent projects may be gone when their project's website is deleted some years later.

Currently, there are only 2 non-Independent projects that can use (Gen7 or newer) Intel GPUs: Einstein@home and NumberFields@home. Even though it isn't listed as a GPU project at the main BOINC project webpage, World Community Grid has an OpenPandemics project that will send work to Intel GPUs, but these tasks generate more heat than my laptops can handle, and getting GPU tasks is difficult because OpenPandemics already has all the GPU help they need.
I like and contribute to Einstein@home, but it currently only has "opencl-intel_gpu" application versions for Windows and Linux (the macOS version is only for 32-bit computers), and you often have to allow beta applications (test applications) to get any work at all on Intel GPUs.
NumberFields@home only has "opencl-intel_gpu" application versions for Windows and Linux (you currently have to allow test applications in the project's settings because the Intel GPU platform is currently marked as beta), but the tasks don't actually start on my Intel GPUs even though the GPUs otherwise seem to be working. More importantly, I wouldn't recommend that anybody use any GPU on this project because its CPU tasks are far more efficient. At Einstein@home, an AMD GPU of mine has about 10 times more computational power than a single CPU core, but NumberFields@home tasks actually take 3 times longer on that GPU than on a CPU core (both tasks do an identical amount of work), so I would be an idiot to put this (or any) GPU on NumberFields@home instead of Einstein@home. Most people seem to have figured this out (the default platforms are the CPU-only ones).
World Community Grid also "has work" for Intel GPUs, but you'll wait a week before you get any because there isn't enough GPU work to go around, so I'd rather not make the problem worse by contributing my GPU to that project (also, the GPU tasks would end with an error after about 12 hours on a couple of my GPUs).

There are many options for my ARM devices. I am only interested in projects that are CPU-only. I would rather my CPU power go to projects that cannot use GPUs. Even so, there are still plenty of projects to choose from. In general, I prefer Rosetta@home and World Community Grid (WCG) because they help treat and cure diseases. I must warn that Rosetta@home can use a lot of RAM, which, if your computer starts swapping memory to your storage drive, will hurt computer performance and will slightly reduce the life of your storage drive.

2022-10 Updates: WCG has been having various issues since Krembil took over in February. Also, Rosetta@home has never provided very consistent work. Rosetta@home releases batches of work from time to time, but its frequently running out of work tells me that my computers could be better used elsewhere; Rosetta apparently already has all the help it needs. Because I really like WCG and want to run their Mapping Cancer Markers tasks, I have chosen to deal with their bugs, which requires me to run these 6-day-deadline tasks while storing 1 + 0.5 days of work (BOINC's "store at least" plus "store up to an additional" settings) to try to keep my computers from running out of work (keep in mind that some Mapping Cancer Markers tasks have a 3-day deadline). When work runs out on a device, BOINC no longer frequently checks for more work, which causes my computers to sit idle, even though, with WCG, you can usually get tasks if you manually keep requesting updates for several minutes in a row. I normally hate downloading a lot of work because the risk of not finishing it on time increases (I hate not finishing on time), and I am now risking not finishing many tasks, but I think being able to run WCG is worth it. I check the WCG Results webpage daily to make sure no devices have lost WiFi or shut down (I would check the Devices webpage instead, but a Krembil bug is that new devices take about a month to show up there; and I would just check my GPU project instead, but some of my GPUs don't have OpenCL). My GPU project (Einstein@home) also then stores more work unless I set its Resource Share to 0. Note that, when attaching a new device to WCG (or when switching projects within WCG), BOINC does not yet know how quickly the device can do WCG work, so you may download far more work than you can handle; wait until predicted times match actual times before increasing the amount of stored work on a new device. Also note that, due to poor design by WCG, whenever Mapping Cancer Markers work runs out on a computer, a large file is deleted and must be downloaded again when new work is eventually sent, which stresses the WCG servers.

To use my Android phone while it charges overnight, I prefer WCG. When tasks suspend as I unplug my phone, only a small amount of work is lost because WCG tasks checkpoint frequently and have short runtimes. Rosetta@home uses a lot of RAM. When there are no available WCG Android tasks, I recommend Universe@home (set Universe@home's "Resource share" to 0 to only get tasks when no work is available from any non-zero project). 2024-01 update: the lead of Universe@home died, but a new lead is taking over, so there will be no work until things get stabilized.

To use my Raspberry Pi, I need projects that support "Linux on ARM", a category that includes other single-board computers similar to the Raspberry Pi. I will assume Raspberry Pi OS is your OS. I also recommend Universe@home (set Universe@home "Resource share" to 0 to only get tasks when no work is available from any non-zero projects). My Pi 3 only has 1 GB of RAM, so I cannot do projects like Rosetta@home (Rosetta is also only for 64-bit Linux on ARM). When I tried 64-bit Raspberry Pi OS, WCG and Universe@home stopped working, but this fixed it! In order to edit cc_config.xml, I did `sudo nano /var/lib/boinc/cc_config.xml`, then I had to run `sudo systemctl restart boinc-client`.
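
I won't reproduce the exact fix linked above, but, if I remember correctly, the usual trick on a 64-bit ARM OS is to have the BOINC client also advertise the 32-bit ARM platform so that projects offering only 32-bit ARM applications will still send work. That is done with an <alt_platform> line in cc_config.xml (treat the exact platform string below as my assumption; verify it against the linked fix and your projects' application pages)...

<cc_config>
  <options>
    <alt_platform>arm-unknown-linux-gnueabihf</alt_platform>
  </options>
</cc_config>

After saving the file, restart the client with `sudo systemctl restart boinc-client` as described above.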

For ordinary (x86-64) CPUs, WCG is wonderful! As a backup (that is, "Resource share" set to 0), Universe@home is great. Better yet, use TN-Grid when it has work available. It is not listed on the official BOINC project list, but it is listed here. Keep in mind that Universe@home runs much faster in Linux than in Windows (especially certain Linux distros), and running BOINC in a Linux virtual machine on Windows can be faster than BOINC directly on Windows! For WCG, Smash Childhood Cancer (SCC) tasks are about 50% faster on Linux, so, for Windows computers, I disable receiving SCC tasks. Also, the TN-Grid project is a bit faster on Linux.

If you have a professional GPU that can do native double-precision calculations very well, you may wish to use your GPU's double-precision (FP64) cores because some calculations require double precision and few GPUs are very good at it. Folding@home only uses single precision (FP32) on GPUs. I believe that there is currently only one BOINC project that wants double-precision GPU calculations: Einstein@home. Einstein@home only uses double precision for a small amount of time at the end of each task and, if the GPU does not support native double-precision calculations, will use the CPU instead. See my previous link to FAHBench to test your GPU's double-precision vs. single-precision capabilities, or just look it up.

Setting up BOINC

If you want to track your earned credits across projects, use the same email address for each project.

On your BOINC Manager, use the Advanced View (not the Simple View) for much more useful information and options!

BOINC projects offer many settings (the info at this link is great!). Some settings are computing preferences and others are project preferences.

I see no good reason to worry about setting computing preferences at project websites. Just set them uniquely on each computer using BOINC Manager...
  • For "Use at most __% of the CPUs" setting, I set this differently for each computer depending on computer's usage, temperature, and resource sharing such as hyper-threading. You simply have to experiment a bit with each computer to see how much work is done at what power and temperature cost. From what I can tell, when rounding a decimal, round the final digit up (for example, to run 5 out of 6 cores, enter 83.4, not 83.3).
  • I uncheck the "Suspend when non-BOINC CPU usage is above ___" option because BOINC runs at low priority and because I sometimes run Folding@home alongside. If I ever noticed an issue, I would use this setting. An alternate solution is setting daily schedules, which can also be handy to avoid peak-hour electricity charges (note that Folding@home does not have daily schedule settings).
  • On computers with nicer GPUs (even Intel GPUs), I do not need the "Suspend GPU computing when computer is in use" setting. On lesser computers, performance is noticeably affected by BOINC running the GPU.
  • If doing CPU-only tasks, set "Leave non-GPU tasks in memory while suspended" to prevent wasted work. Sadly, this setting does not exist on Android.
  • For BOINC memory usage, I want the "in use" to match the "not in use" so that I don't risk losing progress on tasks every time I quickly use a computer. I usually decrease "When computer is not in use, use at most ___% memory" to match the default "in use" setting of 50% (of total physical memory). If I had a computer with lots of RAM, I would set both values to 90%, especially if it's running Linux and rarely used by people. You really shouldn't be depending on these settings; your choice of projects and project preferences should not exceed your available RAM.
  • I want GPU tasks to have slightly higher CPU priority than CPU-only tasks so that the CPU is not limiting the GPU! The default settings on Windows are perfect: GPU tasks run at "below normal" priority, and BOINC and Folding@home CPU-only tasks run at "low" priority. (You can probably change this via the cc_config.xml file; see the sketch after this list.) I also set Resource Share high on my GPU projects to ensure that there is always a GPU task running. You can also have BOINC not run 100% of the CPUs, but there is a better way of doing this (see my section on app_config.xml)!
  • BOINC switches between projects mid-task when the deadlines are far away, which increases RAM usage if you have set "Leave non-GPU tasks in memory while suspended"! I truly don't understand the point of switching mid-task, so I set "switch between tasks every" to 999999 minutes, effectively stopping this behavior. However, BOINC will still switch mid-task if a project can no longer feed all the CPU cores and another multithreaded (mt) application wants to use all the cores for a single task. I wonder if BOINC will still switch mid-task if a deadline is approaching, but I'm not one of those a-holes who grabs more than a day's work at a time, so I don't care to find out.
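
Here is the cc_config.xml sketch promised above. If I am reading the client-configuration documentation correctly, the relevant options are <process_priority> (for CPU tasks) and <process_priority_special> (for GPU and wrapper tasks), where a larger number means a higher OS priority (0 being lowest/idle and 1 being below normal), so the following should simply reproduce the Windows defaults I described; treat the exact numbers as something to verify against the documentation for your client version...

<cc_config>
  <options>
    <process_priority>0</process_priority>
    <process_priority_special>1</process_priority_special>
  </options>
</cc_config>

As with app_config.xml, the file goes in the BOINC data directory, and you then do Options → Read Config Files in BOINC Manager (or restart the client).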

Project preferences must be set at each project's website...
  • For projects that can use my GPUs, I turn off CPU-only tasks.
  • For any project that will let me, I allow beta (test) tasks because I trust the projects to know what's best more than me, and I don't want to become one of those people who only cares about credits.
  • For some projects (the final section here has a list of projects), I have to change various settings to allow them to publicly export data, publicly display data, link devices, etc. This allows me to track my progress across projects and to let other people see basic info about my computers.
  • To halt a computer remotely, I reserve a location, specifically home, to have project settings that prevent computers from getting work (for me, changing computing settings won't do anything because I set each computer locally). Then, if I want to stop work remotely, I can simply move computers to home! The computer will finish current tasks, then that's it! Sadly, not all projects have settings that let you do this even if you are trying to be very clever.
  • To have a computer with more resources run extra sub-projects, I reserve a location, specifically work, to get these sub-projects. For example, for WCG, I have a computer that can do the Africa Rainfall Project, so I move that computer to work!
  • To allow a device to only run a project when other projects run out of work, I reserve a location, specifically school, to have "Resource share" set to 0. For example, Universe@home is a great backup project!

On Android, there are more considerations (in addition to those already mentioned such as there being no "Leave non-GPU tasks in memory while suspended" option)...
  • First of all, you need to install the app called "BOINC" made by U.C. Berkeley. Google Play Store has an old version, but you can download the newest .apk file from BOINC's website.
  • The app doesn't list all projects that support Android (especially older BOINC versions), but, once you add your first project, there's an "add project by URL" option.
  • ARM CPUs are very energy efficient, but a phone has no fans, and a case or other lack of airflow can cause your phone to overheat. BOINC will suspend before your phone has to turn off, but let's avoid this situation! Using the CPU-Z app, I have found that my phone's battery temperature is about 1 °C higher when using a thin case and another degree higher when I don't prop the phone up a bit to allow air to get to its back (and I have never believed in using inductive charging). Also, I have found that it will be another degree cooler if you prop it up with the back facing up. Perhaps the best is to sit the phone flat on a metal table; I haven't measured it yet, but the phone always feels cool!
  • I don't recommend running BOINC on a phone or tablet you currently use. After using my over-3-year-old Samsung phone to do over 1 year of running BOINC on 4 cores, which could be considered 4 years of single-core computation time (and which took 1.5 calendar years because I sometimes unplugged my phone), the heat started to weaken the glue on the sides of the back panel. I believe this was some combination of (1) the phone having a case, (2) me holding the phone with my warm hands while using the plugged-in phone with BOINC running, and (3) the sides of the phone not making good thermal contact with the metal table, especially through the case. This damage is only superficial (fixed using superglue), and I probably could have reduced BOINC's max temperature or reduced the number of cores run by BOINC, but why risk damage? Later, I ran 4 cores of a Samsung tablet for over 1.5 calendar years, and the glue for its back panel started to weaken even though there was no case and it was propped up. It was in a hot room and not sitting with its back against a metal table (the next best thing would have been to prop it up face down, but I had the face up). Superglue fixed it, but I no longer wish to run BOINC on phones or tablets that I still wish to use for other things.
  • A complication is that Android sometimes won't let a process use all CPU cores. I tested an old Kindle by comparing "run time" and "CPU time" on completed tasks, and I found that it used 2 of the 4 cores whether run as a background or a foreground process (if I set BOINC to use more than 2 cores, the "run time" became significantly longer than the "CPU time"). I tested my 4+4-core phone by using adb shell, and I found that, as a background process, only the 4 little cores can be used. As a foreground process, 3 little cores and 4 big cores could be used (little cores are preferred when running fewer than 7 tasks, but an 8th task would hop between little and big cores). I tried a lot of apps to see details of CPU usage, and, on my non-rooted phone (not the Kindle), CPU Monitor is the only one that "worked", though I quickly deleted it because it would always run in the background. I recommend setting your device to use the number of cores that will actually be working; otherwise, (1) you will lose more work each time you unplug the device, (2) you will slow the project's processing of individual workunits, and (3) you will use more RAM. Interestingly, when set to 7 tasks as a foreground process (using 4 big cores), it seems to run at about the same battery temperature as when using just 1 big core. From what I can tell, Android will throttle the big cores for various complicated reasons. As for running in the foreground, you have to manually open the BOINC app before the client starts on its own, and, based on the minimal amount of experimentation I did, it won't stay in the foreground for very long.
  • A related complication is that BOINC can run as a foreground process if you open the app too quickly after the phone restarts. To easily get a sense for how long BOINC takes to start, have your phone in a "BOINC friendly" state (plugged in, over 90% charged, etc.), then restart your phone and wait for the little "Computing" notification to appear! On my phone set to run 4 of 8 cores, running in foreground causes 3 little cores and 1 big core to run instead of 4 little cores. Using a big core causes my battery temperature to go too high for my comfort.
  • Using the GPU (Mali or Adreno) is not an option for any known projects. Projects may never use GPUs due to heat.
  • So that I can check my phone without all my tasks going back to their most recent checkpoint, I set "Don't require screen off" (or uncheck "Pause computation when screen is on"), and I change my "max other-CPU" from 50% to 100%. These settings could be an issue if battery temperature goes over 40 °C, so I install the CPU-Z app to verify that battery temperature is safe.
  • On my Samsung phone, something called "Device care" keeps complaining. I just ignore it because it doesn't understand that BOINC will suspend if things get too hot, suspend if I unplug the device, and only run when battery is more than 90% charged.

Using a Raspberry Pi, there are some considerations...
  • Pis are great for using CPUs on BOINC! Pis are cheap, have very energy-efficient ARM CPUs, automatically restart after any power outage, and run BOINC immediately after a restart. For some projects, RAM can be a limitation, but the Pi 4 can come with up to 8 GB of RAM!
  • Try not to touch exposed parts of the Pi, especially when it's on! Electrostatic discharge once caused almost all my tasks from any project to eventually say "Error while computing" for days until I shut down and unplugged the Pi (though maybe a simple restart could have fixed it).
  • I recommend putting a heat sink on the Pi's processor, and orient the heat sink to allow for vertical airflow when the Pi is placed on its side (you should place the Pi on its side!), though another orientation might be better if a fan is blowing horizontally like this. Even like this, running all cores will cause my Pi 3B to very slightly throttle the CPU depending on room temperature. To fix this, put the Pi in an area with a slight breeze, which reduces the temperature by about 10 °C (the same temperature drop as using 1 fewer core). Here is what the official Pi people have to say about temperatures. To check for throttling, measure the CPU frequency with the `vcgencmd measure_clock arm` command (the `vcgencmd measure_temp` command reports the processor temperature).
  • To install BOINC, run the command `sudo apt install boinc` (perhaps via SSH!). You can then run boinccmd, run the usual GUI (perhaps via VNC!), or, better yet, run `boincmgr &` via `ssh -Y ubuntu@ubuntu.local` (the -Y enables trusted X11 forwarding).
  • The following numbers are from running the WCG project's OpenPandemics (which no longer provides CPU-only tasks). The Pi 4B seems to be almost twice as fast as the 3B using a similar amount of electrical power! On my 3B, I sometimes have to run 3 of the 4 (75%) CPUs to prevent overheating, and I limit RAM usage to 60% of the 1 GB of physical memory. In the winter, I can run all 4 CPUs after increasing BOINC's RAM usage to 70%. The `top -o %MEM` command was useful for figuring this out!
  • Using the GPU will not likely be an option anytime soon. The main GPGPU interfaces are OpenCL (works on most GPUs), CUDA (only Nvidia GPUs), and now Apple's we-want-to-be-unique-and-not-work-with-anyone-else Metal. Anyway, OpenCL support for Pi certainly needs more work and may not bring much benefit.

app_config.xml for BOINC

You can create neat files called app_config.xml to do things like adjust gpu_usage and cpu_usage for GPU tasks! There is some great documentation on these files, and here is some necessary info when figuring out exactly where to put the files. Once you create the file, in BOINC Manager, do Options → Read Config Files. BOINC Manager may not immediately update some things, but the BOINC client is updated. I don't think Android can do this?

To maximize computation of GPU tasks, these files can adjust gpu_usage and cpu_usage. From what I can tell, when the per-task numbers don't add up exactly to an integer, gpu_usage is conservative (0.33 will allow 3 tasks on a GPU), whereas cpu_usage is liberal (0.33 will allow 4 tasks on a single CPU core), though I'm not certain. You may want a gpu_usage less than 1.0 if your GPU can handle more tasks without burning up, but this can slow things down if the tasks fight, not just for computation time, but for GPU RAM. Instead of just reducing gpu_usage from 1.0, some projects have additional configuration files that you can use to better maximize computation!
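
As a minimal generic sketch of the fractional case (the app name below is just a placeholder; the real example in the next paragraph uses actual app names), the following would run two tasks at once on each GPU, with the two tasks together reserving one full CPU core...

<app_config>
  <app>
    <name>example_gpu_app</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>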

I usually try to avoid making app_config.xml files, but here is an example of how I use them. When using my Intel GPUs at Einstein@home, the "Binary Radio Pulsar Search (Arecibo)" application only reserves 0.5 CPUs for each GPU task. This is strange because the other Intel-GPU application ("Gamma-ray pulsar binary search #1 on GPUs") reserves 1.0 CPU per task. Reserving less than 1.0 CPU effectively reserves 0.0 CPUs, so a 4-core CPU will run 4 CPU-only tasks, leaving no CPU power to drive the GPU and causing GPU tasks to take many times longer to finish. A fix is to create the following app_config.xml to reserve 1.0 CPU!

<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP7</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

I found the app names here. An app name changed from BRP4 to BRP4G without any notice in 2021, so I have both in the above list in case the project suddenly switches back. I put the file here: C:\ProgramData\BOINC\projects\einstein.phys.uwm.edu . Another advantage of doing this is that CPU-only tasks are no longer paused in the middle of work when a radio task finishes and is followed by a gamma-ray task (both applications now use the same number of CPUs). If I were running multiple GPU tasks at a time, I would not reserve 1.0 CPU for each because, for my tasks, "run time" is much larger than "CPU time", so each GPU task just needs a little bit of a CPU core. By the way, for radio tasks on my Intel GPUs, I might actually do better if I set both gpu_usage and cpu_usage to 0.5 because then the "run time" is slightly less than doubled (and two radio tasks would still use 1 CPU core), but I would rather err on the side of conserving my computer resources. Lastly, I didn't need to add the hsgamma_FGRPB1G app, but it helps to have all apps in the file in case you ever want to change cpu_usage to another value.

On a computer with an especially weak Intel GPU, I needed to use the "Suspend GPU computing when computer is in use" setting. But, whenever the computer was in use, an extra unwanted CPU-only task would start running (only to be suspended once the computer was idle again)! To prevent this annoying behavior, I reduced "Use at most __% of the CPUs" to 75% (this computer had 4 logical cores), and I used the following app_config.xml (note that a cpu_usage of 0.0 doesn't work)...

<app_config>
  <app>
    <name>einsteinbinary_BRP4</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP4G</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>einsteinbinary_BRP7</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>hsgamma_FGRPB1G</name>
    <max_concurrent>10</max_concurrent>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

If your CPU-only project is multithreaded (uses multiple CPU cores per task), I strongly recommend using this 0.1-CPU setup for Einstein@home to ensure that the CPU-only project sees the correct number of cores (one less than total). The 75% configuration is especially useful if your CPU does hyper-threading.

An important point, especially when running GPU tasks: experiment for yourself on your own computer. I have read on the Einstein@home forums that some people (especially those with a CPU that hyper-threads a single physical CPU core into 2 logical cores) need to reserve 2 logical CPU cores for their GPUs to get the best performance. I have also read on these forums that, unlike most other GPU projects, GPU tasks at Einstein require fast memory. I was playing around with a Dell Inspiron 11 3185 (with an AMD A9-9420e processor and a memory upgrade to 8 GB), and, if I tried to run even a single CPU-only task (for WCG), the Einstein gravitational-wave GPU tasks would run several times slower. I decided to not run any CPU-only tasks! As for why this occurred, I asked the Einstein forums! They said my low-end AMD APU only has one FPU shared between its two physical CPU cores and may also have limited bandwidth to the memory that is shared between the CPU and GPU.

Another use of app_config.xml would be to set max_concurrent for rosetta and minirosetta at the boinc.bakerlab.org_rosetta project (a CPU-only project) to prevent issues of using too much RAM on certain computers (I found the application names here). Better yet, just set project_max_concurrent for the entire project.

<app_config>
  <project_max_concurrent>2</project_max_concurrent>
</app_config>
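
For reference, the per-app version mentioned above would look something like the following (I have not verified that rosetta and minirosetta are still the only application names the project uses)...

<app_config>
  <app>
    <name>rosetta</name>
    <max_concurrent>2</max_concurrent>
  </app>
  <app>
    <name>minirosetta</name>
    <max_concurrent>2</max_concurrent>
  </app>
</app_config>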

Either way, you could then increase Rosetta@home's Resource Share to guarantee that this maximum becomes a minimum as well, making it effectively a fixed value (unless Rosetta runs out of tasks to give or you start a new project that gets all of BOINC's attention). Without using app_config.xml, I normally would never run two projects that give CPU-only tasks on the same device (or two GPU projects) because I hate how BOINC switches between projects mid-task based on long-term (weeks) processing time and deadlines. When BOINC switches mid-task, you either lose progress or fill up memory. I hate computers thinking for me. It's like when I type "weather" into my browser and it takes me to "weather.com", which I did not type (though I must have typed it once before); then I have to start thinking about every damn thing I've ever typed and about how my computer is thinking about what I am thinking so that I can outmaneuver it (by typing "weather " with an extra space or "weather" in another box), when the real fix is to go deep into the browser settings and disable the cruel autofill. I like control, which app_config.xml provides. Do exactly what I tell you, computer! But, for people who use computers imprecisely, BOINC defaults to reasonable things.

Other active projects!

The world is a big place with many projects!

If you want a chance to earn some money from distributed computing and don't mind gimmicks, maybe look into Charity Engine. This project makes distributed computing accessible to any ethical company and donates to charities.

If you like gambling on things that aren't worthwhile, maybe look into mining cryptocurrency. On one hand, you could skip mining cryptocurrency and use your computers to help solve problems in the world that will never need to be solved again; on the other hand, you could waste money on electricity and ASICs while helping the black market.