Problem with NaCl Client

We see that there is an issue with our NaCl Client, with donors seeing this error:

Warning: Unexpected response to AS assignment request: error,DB ERROR: IO error: log.leveldb/016519.ldb: Too many open files

The server is being overloaded, and our sysadmin team will take a look at it when they get back on Monday.  Long term, it looks like it’s time for us to upgrade the server this is running on, since it’s getting overloaded.  We have been preparing new servers for this, so that upgrade should happen in about a week (the servers are here, and the sysadmin team has been working to get them ready for their new roles in FAH).

New Core tech update: OpenMM (GPU) and Gromacs (CPU)

We’ve been pushing hard to improve the performance of OpenMM, especially in OpenCL, since that’s what Folding@home now uses.  We’ve got some great news hot off the presses.  The benchmarks below use the very latest OpenMM code (what will become OpenMM 6.3) with CUDA 6.5, running on a Titan X.  All numbers are in ns/day.

Benchmark Calculation    CUDA   OpenCL
Implicit, 2 fs            471    366
Implicit, 5 fs            684    589
Explicit-RF, 2 fs         305    265
Explicit-RF, 5 fs         508    460
Explicit-PME, 2 fs        161    164
Explicit-PME, 5 fs        318    354


We’re especially pleased with those OpenCL PME numbers.  OpenMM lead developer Peter Eastman has put a lot of work into that for this release, and OpenCL is now actually faster than CUDA (on the Titan X).  Curiously, that is not the case on the GTX 980: OpenCL is still slower than CUDA there, although it comes a lot closer than it used to.

This will be spun into an updated Folding@home core.  The upshot for GPU donors is that PPD for that new core should increase, due to the expanded capabilities of the new code.

It’s important to stress that SMP/CPU donors aren’t left out of new performance (and therefore PPD) updates either: FAH Lead Developer Joseph Coffland has been working hard on a new Gromacs core and that should also see performance benefits, as we roll out AVX support for FAH.

Introducing Shukla Group@Illinois

The Shukla group at the University of Illinois at Urbana-Champaign has just configured new Folding@home servers (ds01[a-d]), which will help us carry out exciting computational experiments in collaboration with the vibrant F@H community.

Before joining Illinois in January 2015, I was a post-doctoral fellow in the Pande Lab, working on the conformational change mechanisms of proteins related to a variety of diseases, including cancer and neurodegenerative and cardiovascular disorders. Some of the key results obtained using Folding@home resources on the conformational change mechanisms of G-Protein Coupled Receptors and kinases are highlighted in previous blog posts.

The mission of my group is to combine theory, computation, and experiments to develop quantitative models of biological phenomena relevant to health, energy, and environmental challenges. These grand challenges will require not only new scientific methodologies and insights but also the development of platforms that enable broader participation by a community of informed citizens in the pursuit of solutions. Folding@home is one such unique platform: it enables engagement with volunteers and donors to help us solve challenging scientific problems. Our group is excited to be a part of the Folding@home team, and we look forward to working with all of you on projects related to key challenges in human health. Specific project details will be posted soon on the folding forum and the F@H blog.

Shukla Group

Multi-core CPU jobs

We’ve been getting reports that FAH is low on CPU jobs.  We’re in the process of adding more multi-core jobs to existing projects.

Also, lead developer Joseph Coffland’s current main project is a new Gromacs CPU core that will enable some new science on CPUs (science that is currently only easily doable on GPUs).  We expect the first testing of that new core to begin in a few weeks.

Issue with fah-web

Likely due to a recent, extremely heavy (and unusually rare) electrical storm, we had some server issues last week.  I thought we’d gotten them all, but we now see that there’s an issue with fah-web, which serves up both the Folding@home stats and project descriptions.  While the stats are accumulated on a separate server, fah-web is the web server that displays them to donors.  I’ve taken a look at it, and the issue with this server is more serious than I can take care of myself, so I’ve filed a ticket with the sysadmin team. Best-guess ETA on this being fixed is Monday at noon (assuming it’s something simple).

The stats accounting is still going on, and this appears to be just an issue with the web server (fah-web), so we expect that the resolution should be simple enough once our sysadmin team gets to this on Monday.

UPDATE Monday May 18 at 10am Pacific time:  We’ve got the machine back up and everything is looking good.  It appears the server was under heavy load after the storm, triggering a PERC reset under load (a longstanding issue) that causes filesystems to go read-only or offline; the problem started at May 16 19:21:52.  We’re planning on buying new hardware to help here, especially since this hardware is on the older side now.

Issues with two servers

We are aware of issues with two of our servers and are looking into them.

Fixes for recent FAH server outage

We recently ran into some problems with our assignment server (AS).  The AS is responsible for distributing the computational power of Folding@home: it sends clients to different work servers (WS), which in turn assign parts of the protein folding simulations to clients.  In the interest of transparency, here’s what happened.

Two issues compounded to cause some clients to not get work assignments for many hours.  The first is a problem we’ve run into before, where the AS exceeds the number of open files allowed by the operating system.  When this happens, it continues to run but fails to assign.  To address this, our lead developer (Joseph Coffland) has added code to the AS that checks the maximum allowed open files at startup and raises the limit to the highest possible value.  If the value is still too low, it prints a warning to the log file.  This will help us ensure that our file-limit settings are actually being respected.
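A startup check like the one described can be sketched as follows.  This is a minimal Python illustration, not the actual AS code, and the `required` threshold is a hypothetical example value:

```python
import resource

def raise_open_file_limit(required=65536):
    """Raise the soft open-file limit to the hard maximum allowed by
    the OS, and warn if even that is below what the server needs.
    (`required` is a made-up example threshold, not a real FAH setting.)"""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if soft != hard:
        # Without extra privileges, the hard limit is the highest value we can set.
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
        soft = hard
    if soft != resource.RLIM_INFINITY and soft < required:
        print(f"WARNING: open-file limit {soft} is below the required {required}")
    return soft
```

Raising the limit at startup (rather than relying on shell or init settings) means a misconfigured environment produces a visible warning instead of silent assignment failures later.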

The second issue was that failover to our second AS (assign2) didn’t work for some clients.  This was related to how we handle clients that cannot connect to port 8080 and WS that cannot receive connections on port 80.  The folding client first attempts to connect on port 8080; if this fails, it tries port 80.  The AS assumes that connections on port 80 come from clients that don’t support connections on port 8080, and it only assigns them to WS that support port 80.

In a failover situation, this assumption is invalid, and the result is that far fewer WS are available during a failover.  To solve this, the AS was modified to prefer, rather than require, WS that support port 80 for connections on port 80.  This change can cause client/WS port mismatches, but only when no better match is possible.  Yes, it’s a tangled web.
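The prefer-rather-than-require matching can be illustrated with a small sketch.  The data shapes and names here are made up for illustration and are not the real AS internals:

```python
def choose_work_server(client_port, servers):
    """Pick a work server for a client that connected on `client_port`.
    Servers accepting that port are preferred, but if none are available
    any server is used, so a failover never strands clients.
    `servers` is a list of (name, accepted_ports) pairs (hypothetical shape)."""
    preferred = [name for name, ports in servers if client_port in ports]
    if preferred:
        return preferred[0]
    # No exact match: allow a client/WS port mismatch rather than refuse to assign.
    return servers[0][0] if servers else None
```

For example, a client on port 80 is matched to a port-80-capable WS when one exists, and otherwise to whatever WS is available.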

In addition to these changes, we have plans to implement an early warning system which should help to alert us to such situations sooner.  We already get SMS notifications if the AS goes down but we need more thorough reporting for situations where the AS is alive but not assigning. This new notification system will be put in place in the next few months.
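One way to detect the "alive but not assigning" case is to alert on the assignment rate rather than on liveness alone.  A minimal sketch of that idea, with made-up window and threshold values:

```python
def assigning_ok(assignment_times, now, window=600, min_count=1):
    """Return True if at least `min_count` assignments happened in the
    last `window` seconds.  A liveness ping alone cannot catch an AS
    that is up but failing to assign, so we watch the work it does.
    (`window` and `min_count` are illustrative defaults, not real settings.)"""
    recent = [t for t in assignment_times if now - t <= window]
    return len(recent) >= min_count
```

A monitor built on such a check would page the team when the rate drops to zero even though the AS process still answers pings.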

Thank you for your patience and for your ongoing contributions to Folding@home!

Two server issues being worked on

We have two server issues being worked on.  There were some issues with the main AS and with the server serving SMP WUs.  We’ll post updates as we have them.

New Maxwell WUs

A heads up for GPU donors: we’ve been looking into shortages of Maxwell GPU WUs.  In addition to adding more WUs, we are also working to improve our monitoring tools.  With so many different variants of WUs, it’s easy for a specific sub-subtype (GPU WUs are a subtype, Maxwell a sub-subtype) to run out if it is in high demand.

NaCl client stats update

Good news for NaCl donors: we’ve found an issue with one of the NaCl servers in Hong Kong that wasn’t awarding points (whereas the servers at Stanford were), so we expect NaCl client donors to see some more points coming their way.

Add your computer's power to over 327,000 others that are helping us find cures to Alzheimer's, Huntington's, Parkinson's and many cancers ...

... in just 5 minutes.

Step 1.

Download the protein folding simulation software.



Step 2.

Run the installation. The software will automatically start up and open a web browser with your control panel.

Step 3.

Follow the instructions to Start Folding.

Stanford University will send your computer a folding problem to solve. When your first job is completed, your computer will swap the results for a new job.

Download the protein folding simulation software that fits your machine.


Installation guide
Or download Folding@home for your Android (4.4+) phone.