ALT Highlights – An Interview with Joelle Pineau

Welcome to ALT Highlights, a series of blog posts spotlighting various happenings at the recent conference ALT 2021, including plenary talks, tutorials, trends in learning theory, and more! To reach a broad audience, the series will be disseminated as guest posts on different blogs in machine learning and theoretical computer science. John has been kind enough to host the first post in the series. This initiative is organized by the Learning Theory Alliance, and overseen by Gautam Kamath. All posts in ALT Highlights are indexed on the official Learning Theory Alliance blog.

The first post is an interview with Joelle Pineau, by Michal Moshkovitz and Keziah Naggita.


We would like you to meet Dr. Joelle Pineau, an astounding leader in AI, based in Montreal, Canada.

Name: Joelle Pineau

Institutions: Joelle Pineau is a faculty member at Mila and an Associate Professor and William Dawson Scholar at the School of Computer Science at McGill University, where she co-directs the Reasoning and Learning Lab. She is a senior fellow of the Canadian Institute for Advanced Research (CIFAR), a co-managing director of Facebook AI Research, and the director of its Montreal, Canada lab. Learn more about Joelle here and her talk here.

Reinforcement Learning (RL)

How and why did you choose to work in reinforcement learning? What inspired you to choose health as a domain of application for your RL work?

I started working in reinforcement learning at the beginning of my PhD in robotics at CMU. Quite honestly, I was delighted by the elegance of the mathematical formulation. It also connected to topics I had studied previously (in supervised learning & in operations research). It was also useful for decision-making, which was complementary to state tracking & prediction, the topic studied by many other members of my lab at the time.

I started working on applications to health-care early in my career as a faculty member at McGill. I was curious to explore practical applications, and found some colleagues in health-care who had some interesting decision-making problems with the right characteristics.

How would you recommend a newcomer enter the RL field?  For RL researchers interested in safety, is there some literature you can recommend as a starting point?

Get familiar with the basic mathematical formalism & algorithms, and try your hand at easy simulation cases. For RL and safety, the literature is very small and quite recent, so it’s easy enough to get started. Work on Constrained MDPs (Altman, 1999) is a good starting point. See also the work on Seldonian RL by Phil Thomas and colleagues.

In your talk you mentioned applications of RL to different domains. What do you think is the main achievement of RL? 

The AlphaGo result was very impressive!  Recently, the work on using RL to control the flight of the Loon balloons is also quite impressive.

What are the big open problems in RL? 

Efficient exploration continues to be a major challenge. Stability of learning, even when the data is non-stationary (e.g. due to policy change), is also very important to address. In my talk I also highlighted the importance of developing methods for RL with responsible properties (safety, security, transparency, etc.) as a major open problem.

Collaborations

Based on your work in neurostimulation, it appears that people from different fields of expertise were involved. 

Yes, this was a close collaboration between researchers in CS (my own lab) and researchers in neuroscience, with expertise in electrophysiology.

What advice would you give researchers in finding interdisciplinary collaborators?

This collaboration was literally started by me picking up the phone and calling a colleague in neuroscience to propose the project.  I then wrote a grant proposal and obtained funding to start the project.  More generally, these days it’s actually very easy for researchers in machine learning to find interdisciplinary collaborators.  Giving talks, offering office hours, speaking to colleagues you meet in random events – I’ve had literally dozens of projects proposed to me in the last few years, from all sorts of disciplines.

What are some of the best ways to foster successful collaborations tackling work cutting across multiple disciplines?

Spend time understanding the problems from the point of view of your collaborator, and commit to solving *that* problem.  Don’t walk in with  your own hammer (or pre-selected set of techniques), and expect to find a problem to show-off your techniques. Genuine curiosity about the other field is very valuable!  Don’t hesitate to read the literature – don’t expect your collaborator to share all the needed knowledge.  Co-supervising a student together is also often an effective way of working closely together.

Academia, industry and everything in between 

During the talk, you mentioned variance in freedom of research for theoreticians in industry versus academia. Could you elaborate more about this? Are there certain personality traits or characteristics more likely to make someone more successful in academia versus industry?

For certain more theoretical work, it can be a long time until the impact and value of the work is realized. This is perhaps harder to support in industry, which is better suited to appreciating shorter-term impact. Another big difference is that in academia, professors work closely with students and junior researchers, and should expect to dedicate a good amount of time and energy to training & developing them (even if it means the work might move along a bit slower). In industry, a researcher will most often work with more senior researchers, and the project is likely to move along faster (also because no one is taking or teaching courses).

How do you balance leadership, for example at FAIR, with student advising at McGill, research [CIFAR, FAIR, McGill, Mila], and personal life?

It’s useful to have clarity about your priorities.  Don’t let other people dictate what these are – you should decide for yourself.  And then spend your time according to this.  I enjoy my work at FAIR a lot, I also really enjoy spending time with my grad students at McGill/Mila, and of course I really enjoy time with my family & friends.  So I try to keep a good balance between all of this. I also try to be clear & transparent with other people about my availability & priorities, so they can plan accordingly.

What do you think distinguishes the mindset of an extraordinary researcher?

To be a strong researcher, it helps to be very curious and to genuinely want to understand and discover new knowledge. The ability to find new connections between ideas and concepts is also useful. For scientific research, you also need discipline and good methodology, and a commitment to deep understanding (rather than “proving” whatever hypothesis you hold). Frankly, I also don’t think we need to further cultivate the myth of the “extraordinary researcher”. Research is primarily a collective institution, where many people contribute, in ways small and big, and it is through this collective work that we achieve big discoveries and breakthroughs!

What is the Right Response to Employer Misbehavior in Research?

I enjoyed my conversations with Timnit when she was in the MSR-NYC lab, so her situation has been on my mind throughout NeurIPS.

Piecing together what happened second-hand is always tricky, but Jeff Dean’s account and Timnit’s agree on a basic outline. Timnit and others wrote a paper for FAccT which was approved for submission by the normal internal review process, then later unapproved. Timnit threatened to leave unless various details about this unapproval were clarified. Google then declared her resigned.

The definition of resign makes it clear an employee does it, not an employer. Since that apparently never happened, this is a mischaracterized firing. It also seems quite credible that the unapproval process was highly unusual based on various reactions I’ve seen and my personal expectations of what researchers would typically tolerate.

This frankly looks bad to me and quite a number of other people. Aside from the plain facts, this is also consistent with racism and/or sexism given the roles of those involved. Google itself now faces a substantial rebellion amongst employees.

However, I worry about consequences to some of these reactions.

  1. Some people suggest not reviewing papers from Google-based researchers. As a personal decision, this is making a program chair’s difficult job harder. As a communal decision, this would devastate the community since a substantial fraction are employed at Google. These people did not make this decision and many actively support Timnit there (at some risk to their job) so a mass-punishment approach seems deeply counterproductive.
  2. Others have suggested that Google should not be a sponsor at major machine learning conferences. Since all of these are run as nonprofits, the lost grants will either be made up by increasing costs for everyone or reducing grants to students and diversity sponsorship. Reduced grants in particular seem deeply counterproductive.
  3. Some have suggested that all industry research in general is bad. Industrial research varies substantially from place to place, perhaps much more so than in academia. As an example, Microsoft Research has no similar internal review process for publications. Overall, the stereotyping inherent in this view makes me uncomfortable, and there are some real advantages to working in industry in terms of the ability to concentrate on research or to effect real change.

It’s critical to understand that the strength of the research community is incredibly valuable to everyone in it. It’s not hard to imagine a different arrangement where all industrial research is proprietary, with only a few major companies operating competitive internal research teams. This sort of structure exists in some other fields, often to the detriment of anyone other than a major company. Researchers at those companies can’t as easily switch jobs and researchers outside of those companies may lack the context to even contribute to the state of the art. The field itself progresses slower and in a more secretive way due to lack of sharing. Anticommunal acts based on mass ostracization or abandonment could shift our structure from the current relatively happy equilibrium, where people from all over can participate, learn, and contribute, towards a much worse situation.

This is not to say that there are no consequences. The substantial natural consequences of a significant moral-impacting event will play out regardless of anything else. The marketplace for top researchers is quite competitive, so for many of them uncertainty about the feasibility of publication, the disposition and competence of senior leadership, or constraints on topics tips the balance towards other offers. That may be severe this year, since this all blew up as the recruiting season was launching, and I expect it to last over many years unless some significant action is taken. In this sense, I expect all the competitors may be looking forward to recruiting more than they were previously, and the cost of not resolving the conflict here in a better way may be much, much higher than just about any other course of action. This is not particularly hypothetical: I saw it play out in the years after the Silicon Valley lab was cut, when the brain drain of other great researchers in competitive areas was severe for several years afterwards.

I don’t think a general answer to the starting question is possible, since it will always depend on circumstances. Even this instance is complex with actions that could cause unintuitive adverse impacts on unanticipated parts of our community or damage the community as a whole. I personally hope that the considerable natural consequences here form a substantial deterrent to misbehavior in the long term. Please think this through when considering your actions here.

Edits: tweaked conclusion wording a bit with advice from reshamas.

Experiments with the ICML 2020 Peer-Review Process

This post is cross-listed on the CMU ML blog.

The International Conference on Machine Learning (ICML) is a flagship machine learning conference that in 2020 received 4,990 submissions and managed a pool of 3,931 reviewers and area chairs. Given that the stakes in the review process are high — the careers of researchers are often significantly affected by the publications in top venues — we decided to scrutinize several components of the peer-review process in a series of experiments. Specifically, in conjunction with the ICML 2020 conference, we performed three experiments that target: resubmission policies, management of reviewer discussions, and reviewer recruiting. In this post, we summarize the results of these studies.

Resubmission Bias

Motivation. Several leading ML and AI conferences have recently started requiring authors to declare previous submission history of their papers. In part, such measures are taken to reduce the load on reviewers by discouraging resubmissions without substantial changes. However, this requirement poses a risk of bias in reviewers’ evaluations.

Research question. Do reviewers get biased when they know that the paper they are reviewing was previously rejected from a similar venue?

Procedure. We organized an auxiliary conference review process with 134 junior reviewers from 5 top US schools and 19 papers from various areas of ML. We assigned participants 1 paper each and asked them to review the paper as if it was submitted to ICML. Unbeknown to participants, we allocated them to a test or control condition uniformly at random:

Control. Participants review the papers as usual.

Test. Before reading the paper, participants are told that the paper they review is a resubmission.

Hypothesis. We expect that if the bias is present, reviewers in the test condition should be harsher than in the control. 

Key findings. Reviewers give a score that is almost one point lower (95% confidence interval: [0.24, 1.30]) on a 10-point Likert item for the overall evaluation of a paper when they are told that the paper is a resubmission. In terms of narrower review criteria, reviewers tend to underrate “Paper Quality” the most.
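As a rough illustration of the kind of estimate involved, here is a minimal sketch of a two-sample comparison with a t-based confidence interval. The scores below are synthetic placeholders rather than the study’s data, and the paper’s actual analysis may use a different estimator.

    import numpy as np
    from scipy import stats

    # Synthetic placeholder scores on a 10-point overall-evaluation item.
    rng = np.random.default_rng(0)
    control = rng.normal(6.0, 2.0, size=67)  # reviewers given no submission-history signal
    test = rng.normal(5.2, 2.0, size=67)     # reviewers told the paper is a resubmission

    # Difference in means with a 95% t-based confidence interval.
    diff = control.mean() - test.mean()
    se = np.sqrt(control.var(ddof=1) / len(control) + test.var(ddof=1) / len(test))
    dof = len(control) + len(test) - 2
    half_width = stats.t.ppf(0.975, dof) * se
    print(f"estimated drop: {diff:.2f}, 95% CI: [{diff - half_width:.2f}, {diff + half_width:.2f}]")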

Implications. Conference organizers need to evaluate a trade-off between envisaged benefits such as the hypothetical reduction in the number of submissions and the potential unfairness introduced to the process by the resubmission bias. One option to reduce the bias is to postpone the moment in which the resubmission signal is revealed until after the initial reviews are submitted. This finding must also be accounted for when deciding whether the reviews of rejected papers should be publicly available on systems like openreview.net and others. 

Details. http://arxiv.org/abs/2011.14646

Herding Effects in Discussions

Motivation. Past research on human decision making shows that group discussion is susceptible to various biases related to social influence. For instance, it is documented that the decision of a group may be biased towards the opinion of the group member who proposes the solution first. We call this effect herding and note that, in peer review, herding (if present) may result in undesirable artifacts in decisions as different area chairs use different strategies to select the discussion initiator.

Research question. Conditioned on a set of reviewers who actively participate in a discussion of a paper, does the final decision of the paper depend on the order in which reviewers join the discussion?

Procedure. We performed a randomized controlled trial on herding in ICML 2020 discussions that involved about 1,500 papers and 2,000 reviewers. In peer review, the discussion takes place after the reviewers submit their initial reviews, so we know prior opinions of reviewers about the papers. With this information, we split a subset of ICML papers into two groups uniformly at random and applied different discussion-management strategies to them: 

Positive Group. First ask the most positive reviewer to start the discussion, then later ask the most negative reviewer to contribute to the discussion.

Negative Group. First ask the most negative reviewer to start the discussion, then later ask the most positive reviewer to contribute to the discussion.

Hypothesis. The only difference between the strategies is the order in which reviewers are supposed to join the discussion. Hence, if the herding is absent, the strategies will not impact submissions from the two groups disproportionately. However, if the herding is present, we expect that the difference in the order will introduce a difference in the acceptance rates across the two groups of papers.

Key findings. The analysis of outcomes of approximately 1,500 papers does not reveal a statistically significant difference in acceptance rates between the two groups of papers. Hence, we find no evidence of herding in the discussion phase of peer review.
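To make the comparison concrete, here is a minimal sketch of a two-proportion z-test on acceptance rates. The counts are synthetic placeholders rather than ICML data, and the paper’s actual analysis may differ.

    import numpy as np
    from scipy import stats

    # Synthetic placeholder counts for the (positive-first, negative-first) groups.
    accepted = np.array([160, 155])
    total = np.array([750, 750])

    # Pooled two-proportion z-test for a difference in acceptance rates.
    p_pool = accepted.sum() / total.sum()
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / total[0] + 1 / total[1]))
    z = (accepted[0] / total[0] - accepted[1] / total[1]) / se
    p_value = 2 * stats.norm.sf(abs(z))
    print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")  # a large p-value is consistent with no herding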

Implications. Although herding has been documented in other group decision-making settings, the discussion phase of peer review does not appear to be susceptible to this effect, and hence no specific measures to counteract herding in peer-review discussions seem to be needed.

Details. https://arxiv.org/abs/2011.15083

Novice Reviewer Recruiting

Motivation.  A surge in the number of submissions received by leading ML and  AI conferences has challenged the sustainability of the review process by increasing the burden on the pool of qualified reviewers. Leading conferences have been addressing the issue by relaxing the seniority bar for reviewers and inviting very junior researchers with limited or no publication history, but there is mixed evidence regarding the impact of such interventions on the quality of reviews. 

Research question. Can very junior reviewers be recruited and guided such that they enlarge the reviewer pool of leading ML and AI conferences without compromising the quality of the process?

Procedure. We implemented a twofold approach towards managing novice reviewers:

Selection. We evaluated reviews written in the aforementioned auxiliary conference review process involving 134 junior reviewers, and invited the 52 reviewers who produced the strongest reviews to join the reviewer pool of ICML 2020. Most of these 52 “experimental” reviewers come from a population not reached by the conventional reviewer-recruiting process used in ICML 2020.

Mentoring. In the actual conference, we provided these experimental reviewers with a senior researcher as a point of contact who offered additional mentoring.

Hypothesis. If our approach succeeds in bringing strong reviewers into the pool, we expect experimental reviewers to perform at least as well as reviewers from the main pool on various metrics, including the quality of reviews as rated by area chairs.

Key findings. A combination of the selection and mentoring mechanisms results in reviews whose quality is at least comparable to, and on some metrics rated higher than, that of the conventional pool: 30% of reviews written by the experimental reviewers exceeded the expectations of area chairs (compared to only 14% for the main pool).

Implications. The experiment received positive feedback from participants who appreciated the opportunity to become a reviewer in ICML 2020 and from authors of papers used in the auxiliary review process who received a set of useful reviews without submitting to a real conference. Hence, we believe that a promising direction is to replicate the experiment at a larger scale and evaluate the benefits of each component of our approach.

Details. http://arxiv.org/abs/2011.15050

Conclusion

All in all, the experiments we conducted in ICML 2020 reveal some useful and actionable insights about the peer-review process. We hope that some of these ideas will help to design a better peer-review pipeline in future conferences.

We thank ICML area chairs, reviewers, and authors for their tremendous efforts. We would also like to thank the Microsoft Conference Management Toolkit (CMT) team for their continuous support and implementation of features necessary to run these experiments, the authors of papers contributed to the auxiliary review process for their responsiveness, and participants of the resubmission bias experiment for their enthusiasm. Finally, we thank Ed Kennedy and Devendra Chaplot for their help with designing and executing the experiments.

The post is based on the works by Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III, and Charvi Rastogi.

HOMER: Provable Exploration in Reinforcement Learning

Last week at ICML 2020, Mikael Henaff, Akshay Krishnamurthy, John Langford, and I had a paper on a new reinforcement learning (RL) algorithm that solves three key problems in RL: (i) global exploration, (ii) decoding latent dynamics, and (iii) optimizing a given reward function. Our ICML poster is here.

The paper is a bit mathematically heavy, so this post is an attempt to distill the key findings. We will also be following up soon with a new codebase release (more on that later).

Rich-observation RL landscape

Consider the combination lock problem shown below. The agent starts in state s_1a or s_1b with equal probability. After taking h-1 actions, the agent will be in state s_ha, s_hb, or s_hc. The agent can take 10 different actions. The agent observes a high-dimensional observation instead of the underlying state, which is latent. There is a big treasure chest that one can get after taking 100 actions. We view the states with subscript “a” or “b” as “good states” and the states with subscript “c” as “bad states”. You can reach the treasure chest at the end only if you remain in good states. If you reach any bad state, then you can never make it to the treasure chest.

The environment makes it difficult to reach the big treasure chest in three ways. First, the environmental dynamics are such that if you are in a good state, then only 1 out of the 10 possible actions will take you to one of the two good states at the next time step with equal probability (the good action changes from state to state). Every other action in good states, and all actions in bad states, put you into bad states at the next time step, from which it is impossible to recover. Second, it misleads myopic agents by giving a small bonus for transitioning from a good state to a bad state (the small treasure chest). This means that a locally optimal policy transitions to one of the bad states as quickly as possible. Third, the agent never directly observes which state it is in. Instead, it receives a high-dimensional, noisy observation from which it must decode the true underlying state.

It is easy to see that if we take actions uniformly at random, then the probability of reaching the big treasure chest at the end is 1/10^100. The number 10^100 is called a googol and is larger than the current estimate of the number of elementary particles in the universe. Furthermore, since transitions are stochastic, one can show that no fixed sequence of actions performs well either.
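To make the setup concrete, here is a minimal sketch of such a combination-lock environment. The class name, constants, and observation scheme are illustrative assumptions, not the paper’s exact construction (in particular, the real construction uses a harder-to-decode observation process).

    import numpy as np

    class CombinationLock:
        """Sketch of the combination lock: two good states and one absorbing bad state
        per time step, ten actions, noisy high-dimensional observations, a small
        misleading bonus, and a big final reward."""

        def __init__(self, horizon=100, num_actions=10, obs_dim=128, seed=0):
            self.rng = np.random.RandomState(seed)
            self.horizon, self.num_actions, self.obs_dim = horizon, num_actions, obs_dim
            # One unknown "good" action per (time step, good state).
            self.good_actions = self.rng.randint(num_actions, size=(horizon, 2))

        def reset(self):
            self.h = 0
            self.state = self.rng.randint(2)  # start in one of the two good states
            return self._obs()

        def step(self, action):
            reward = 0.0
            if self.state < 2:  # currently in a good state
                if action == self.good_actions[self.h, self.state]:
                    self.state = self.rng.randint(2)  # move to a good state uniformly at random
                else:
                    self.state = 2   # absorbing bad state
                    reward = 0.1     # small misleading treasure chest
            self.h += 1
            done = self.h == self.horizon
            if done and self.state < 2:
                reward = 1.0         # big treasure chest, reachable only via good states
            return self._obs(), reward, done

        def _obs(self):
            # The agent only ever sees a noisy high-dimensional observation, never the latent state.
            x = np.zeros(self.obs_dim)
            x[self.state] = 1.0
            return x + 0.1 * self.rng.randn(self.obs_dim)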

A key aspect of the rich-observation setting is that the agent receives observations instead of the latent state. The observations are stochastically sampled from an infinitely large space conditioned on the state. However, observations are rich enough to enable decoding the latent state which generates them.

What does provable RL mean?

A provable RL algorithm means that for any given numbers ε, δ in (0, 1), we can learn an ε-optimal policy with probability at least 1-δ using a number of episodes which is polynomial in relevant quantities (state space size, horizon, action space size, 1/ε, 1/δ, etc.). By an ε-optimal policy we mean a policy whose value (expected total return) is at most ε less than the value of the optimal policy.

Thus, a provable RL algorithm is capable of learning a close-to-optimal policy with high probability (where “high” and “close” can be made arbitrarily stringent), provided the assumptions it makes are satisfied.
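Written out, a guarantee of this type has the familiar PAC form (my phrasing; the exact quantities in the paper may differ): with at most $\mathrm{poly}(S, A, H, 1/\epsilon, 1/\delta)$ episodes of experience,

$$\Pr\big[\, V(\hat{\pi}) \;\ge\; \max_{\pi} V(\pi) - \epsilon \,\big] \;\ge\; 1 - \delta,$$

where $S$ is the number of latent states, $A$ the number of actions, $H$ the horizon, and $\hat{\pi}$ the returned policy.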

Why should I care if my algorithm is provable?

There are two main advantages of being able to show your algorithm is provable:

  1. We can only test an algorithm on a finite number of environments (in practice somewhere between 1 and 20). Without guarantees, we don’t know how it will behave in a new environment. This matters especially if failure in a new environment can result in high real-world costs (e.g., in health or financial domains).
  2. If a provable algorithm fails to consistently give the desired result, this can be attributed to failure of at least one of its assumptions. A developer can then look at the assumptions and try to determine which ones are violated, and either intervene to fix them or determine that the algorithm is not appropriate for the problem.

HOMER

Our algorithm addresses what is known as the Block MDP setting. In this setting, a small number of discrete states generates a potentially infinite number of high dimensional observations.
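A rough way to write down the Block MDP assumption (my notation, not necessarily the paper’s): each observation is drawn as $x \sim q(\cdot \mid s)$ from its latent state $s$, with disjoint supports across states,

$$\operatorname{supp}\, q(\cdot \mid s) \,\cap\, \operatorname{supp}\, q(\cdot \mid s') = \emptyset \quad \text{for } s \neq s',$$

so every observation is generated by exactly one latent state, and a perfect (if complicated) decoder $\phi$ with $\phi(x) = s$ exists even though the agent never observes $s$ directly.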

For each time step, HOMER learns a state decoder function, and a set of exploration policies. The state decoder maps high-dimensional observations to a small set of possible latent states, while the exploration policies map observations to actions which will lead the agent to each of the latent states. We describe HOMER below.

  • For a given time step, we first learn a decoder for mapping observations to a small set of values using contrastive learning. This procedure works as follows: collect a transition by following a randomly sampled exploration policy from the previous time step until that time step, and then taking a single random action. We use this procedure to sample two independent transitions.
  • We then flip a coin; if we get heads then we store the real transition (x1, a1, x’1), and otherwise we store the imposter transition (x1, a1, x’2). We train a supervised classifier to predict whether a given transition (x, a, x’) is real or not (a sketch of this step appears after this list).
    This classifier has a special structure which allows us to recover a decoder for time step h.
  • Once we have learned the state decoder, we learn an exploration policy for every possible value of the decoder (which we call an abstract state, as these values are related to the latent state space). This step is standard and can be done using many different approaches such as model-based planning, model-free methods, etc. In the paper we use an existing model-free algorithm called policy search by dynamic programming (PSDP) by Bagnell et al. 2004.
  • We have now recovered a decoder and a set of exploration policies for this time step. We repeat this for every time step, learning a decoder and exploration policies for the whole latent state space. Finally, we can easily optimize any given reward function using any provable planner like PSDP or a model-based algorithm. (The algorithm actually recovers the latent state space up to an inherent ambiguity by combining two different decoders, but I’ll leave that out to avoid overloading this post.)
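Here is a minimal PyTorch sketch of the contrastive step referenced above: a classifier that distinguishes real transitions from imposters, with a bottleneck over the next observation that serves as the learned decoder. The architecture, sizes, and names are illustrative assumptions, not HOMER’s actual implementation.

    import torch
    import torch.nn as nn

    class TransitionClassifier(nn.Module):
        def __init__(self, obs_dim, num_actions, num_abstract_states):
            super().__init__()
            # Bottleneck over the next observation; its argmax acts as the decoder.
            self.encode_next = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, num_abstract_states))
            self.score = nn.Linear(obs_dim + num_actions + num_abstract_states, 1)

        def forward(self, x, a_onehot, x_next):
            z = torch.softmax(self.encode_next(x_next), dim=-1)  # soft abstract state of x'
            return self.score(torch.cat([x, a_onehot, z], dim=-1)).squeeze(-1)

        def decode(self, x_next):
            # The learned decoder: map an observation to an abstract-state index.
            return self.encode_next(x_next).argmax(dim=-1)

    def contrastive_loss(model, x, a_onehot, x_next, is_real):
        # is_real is 1.0 for stored real transitions and 0.0 for imposters.
        logits = model(x, a_onehot, x_next)
        return nn.functional.binary_cross_entropy_with_logits(logits, is_real)

Training this classifier on the coin-flip data described above and then reading off decode(x') is the essence of the decoder-learning step.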

Key findings

HOMER achieves the following three properties:

  1. The contrastive learning procedure gives us the right state decoding (we recover up to some inherent ambiguity but I won’t cover it here).
  2. HOMER can learn a set of exploration policies to reach every latent state.
  3. HOMER can learn a nearly-optimal policy for any given reward function with high probability. Further, this can be done after the exploration part has been performed.

Failure cases of prior RL algorithms

There are many RL algorithms in the literature and many new ones are proposed every month. It is difficult to do justice to this vast literature in a blog post, and it is equally difficult to situate HOMER within it. However, we show that several very commonly used RL algorithms fail to solve the above problem while HOMER succeeds. One of these is the PPO algorithm, a widely used policy gradient method. In spite of its popular use, PPO is not designed for challenging exploration problems and easily fails. Researchers have made efforts to alleviate this with ad-hoc proposals such as using prediction errors, counts based on auto-encoders, etc. The best alternative approach we found is called Random Network Distillation (RND), which measures the novelty of a state based on prediction errors for a fixed randomly initialized network.

Below we show how PPO+RND fails to solve the above problem while HOMER succeeds. We simplify the problem by using a grid pattern where rows represent states (the top two rows represent “good” states and the bottom row represents “bad” states), and columns represent time steps.

We present counter-examples for other algorithms in the paper (see Section 6 here). These counterexamples allow us to find limits of prior work without expensive empirical computation on many domains.

How can I use HOMER?

We will be providing the code soon as part of a new package release called cereb-rl. You can find it here: https://github.com/cereb-rl and join the discussion here: https://gitter.im/cereb-rl

Critical issues in digital contact tracing

I spent the last month becoming a connoisseur of digital contact tracing approaches since this seems like something where I might be able to help. Many other people have been thinking along similar lines (great), but I also see several misconceptions that even smart and deeply involved people are making.

For the following a key distinction to understand is between proximity and location approaches. In proximity approaches (such as DP3T, TCN, MIT PACT(*), Apple or one of the UW PACT(*) protocols which I am involved in) smartphones use Bluetooth low energy and possibly ultrasonics to discover other smartphones nearby. Location approaches (such as MIT Safe Paths or Israel) instead record the absolute location of the device based on gps, cell tower triangulation, or wifi signals.

Location traces are both poor quality and intrinsically identifying
Many people assume that because a phone can determine where it is, it can do so with high precision. This is typically incorrect. Common healthcare guidance for possible contact is “within 2 meters for 10 minutes” while location data is often off by 10-100 meters, with accuracy varying by which location methodology is in use. As an example, approximately everyone in Manhattan may be within 100 meters of someone who later tested positive for COVID-19. Given this inaccuracy, I expect users of a system based on location crossing to simply turn it off due to the large number of false positives.

These location traces, even though they are crude, are also highly identifying. When going about your normal pre-pandemic life, you move from location X to Y to Z. Typically no one else goes from X to Y to Z in the same timeframe (clocks are typically very accurate). If you test positive and make your trace available to help suppress the virus, a store owner with a video camera and a credit card record might de-anonymize you and accuse you of killing someone they care about. Given the stakes here, preserving as much anonymity as possible is critical for convincing people to release the information which is needed to control the virus.

Given this, approaches which upload the location data of users seem likely to have reduced adoption and many false positives. While some governments, like Israel’s, are choosing to use all location data on an involuntary basis, the lack of effectiveness compared to proximity-based approaches and the draconian compromise of civil liberties are worrisome.

Location traces can be useful in a privacy-preserving way
Understanding the above, people often conclude that location traces are subsumed by alternatives. That’s not true. Location approaches can be made very private by simply never allowing a location trace to leave the personal device. While this might feel contradictory to epidemiological success, it’s actually extremely helpful in at least two ways.

  1. People have a pretty poor memory, so when they test positive and someone calls them up to do a contact tracing interview, having a location trace on their phone can be incredibly useful in jogging their memory. Using the location trace this way allows the manual contact tracing process to be much more complete. It can also be made much faster by allowing infected people to prefill much of a contact interview form before they get a call.
  2. The virus is inherently very localized, so public health authorities often want to quickly talk to people at location X or warn people to stay away from location Y until it is cleaned. This can be strongly enabled by on-device location traces. The phone can download all the public health messages in a region and check automatically which are relevant to the phone’s location trace, surfacing those as needed to the user. This provides more power than crossing location traces. A message of “If you were at store X on April 16th, please email w@y.z” allows people to not respond if they went to store V next door.

Both of these capabilities are a part of the UW PACT protocols I worked on for this reason.
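As a rough illustration of the second capability, the matching can run entirely on the phone. The message format and function names below are hypothetical, not part of any actual PACT specification.

    import math
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LocationPoint:
        lat: float
        lon: float
        timestamp: float  # unix seconds

    @dataclass
    class HealthMessage:
        lat: float
        lon: float
        start: float
        end: float
        radius_m: float
        text: str  # e.g. "If you were at store X on April 16th, please email w@y.z"

    def approx_distance_m(lat1, lon1, lat2, lon2):
        # Crude equirectangular approximation, adequate at the ~100 meter scale.
        dx = (lon1 - lon2) * 111320 * math.cos(math.radians((lat1 + lat2) / 2))
        dy = (lat1 - lat2) * 111320
        return math.hypot(dx, dy)

    def relevant_messages(trace: List[LocationPoint], messages: List[HealthMessage]):
        # Runs locally on the device: the location trace never leaves the phone.
        hits = []
        for m in messages:
            if any(m.start <= p.timestamp <= m.end and
                   approx_distance_m(p.lat, p.lon, m.lat, m.lon) <= m.radius_m
                   for p in trace):
                hits.append(m)
        return hits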

Proximity-only approaches have an x^2 problem

When people abandon location-based approaches, it’s in favor of proximity-based approaches. For any proximity protocol to work, both the infected person and the contact must be running the protocol, implying there are two ways for it to fail to be useful.
To get a sense of what is necessary, consider the reproduction number of the coronavirus. Estimates vary, but a reproduction number of 2.5 is reasonable. That is, the virus might infect 2.5 new people per infected person on average in the absence of countermeasures. To keep an infection with a base reproduction number of 2.5 from exponentiating, it is necessary to reduce the reproduction number to 1, which can be done when 60% of contacts are discovered, assuming (optimistically) no testing error and perfect isolation of discovered contacts before they infect anyone else.

To reach 60% you need 77.5% of people to download and run proximity protocols. This is impossible in many places where smartphones are owned by fewer than 77.5% of the population. Even in places where it’s possible it’s difficult to imagine reaching that level of usage without it being a mandatory part of the operating system that you are forced to use. Even then, subpopulations without smartphones are not covered. The square problem gets worse at lower levels of adoption. At 10% adoption (which corresponds to a hugely popular app), only 1% of contacts can be discovered via this mechanism. Despite the smallness, informing 1% of contacts does have real value in about the same sense that if someone loaned you money with a 1%/week interest rate you would call them a loan shark. At the same time, this is only 1/60th of a solution to getting the reproduction number below 1.
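The arithmetic behind these numbers, as a quick sketch (under the optimistic assumptions stated above):

    # Back-of-the-envelope arithmetic for the quadratic adoption problem.
    R0 = 2.5
    needed_coverage = 1 - 1 / R0                 # fraction of transmissions to block: 0.6
    required_adoption = needed_coverage ** 0.5   # both parties must be running the protocol
    print(f"required adoption: {required_adoption:.1%}")                # ~77.5%

    adoption = 0.10                              # a hugely popular app
    print(f"contacts discovered at 10% adoption: {adoption ** 2:.1%}")  # 1.0%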

Hence, people advocating for proximity approaches must either hope for pervasive mandatory use (which will still miss subcommunities without smartphones) or accept that proximity approaches are only a part of the picture.

This quadratic structure also implies that the number of successful proximity tracing protocols will be either 0 or 1 in any geographic region. Given that Apple/Google are building a protocol into their OSes, that’s the candidate for the possible 1 in most of the world once it becomes available(**).

This quadratic structure is difficult to avoid. For example, if location traces are crossed with location traces, the same issue comes up. Similarly for proximity tracing, you could imagine recording “wild” bluetooth beacons and then reporting them to avoid the quadratic structure. However, this unavoidably reveals contacts publicly, which can then cause the positive person to be revealed publicly.

Interestingly, traditional manual contact tracing does not suffer from the quadratic problem. Hence approaches (discussed above) which augment and benefit from manual contact tracing have a linear value structure, which matters enormously with lower levels of adoption.

What works?
The primary thrust of contact tracing needs to be manual, as that is what has worked in countries (like South Korea) which suppressed large outbreaks. Purely digital approaches don’t seem like a credible solution due to the issues discussed above. Hybrid approaches with smartphone-based apps can help by complementing manual contact tracing and perhaps via proximity approaches. Getting there requires high levels of adoption, which implies trust is a critical commodity. In addition to navigating the issues above, projects need to be open source, voluntary, useful, and strongly respect privacy (the ACLU recommendations are good here). This is what the CovidSafe project is aimed at in implementing the UW PACT protocols. Projects not navigating the above issues as well are less credible in my understanding.

An acknowledgement: many people have affected my thinking through this process, particularly those on the UW PACT paper and CovidSafe projects.

(*) I have no idea how the name collision occurred. We started using PACT here, 3 weeks ago, and circulated drafts to many people including a subset of the MIT PACT group before putting it on arxiv.

(**) The Apple protocol is a bit worrisome as development there is not particularly open and I have a concern about the crypto protocol. The Tracing Key on page 5, if acquired via hack or subpoena, allows you to prove the location of a device years after the fact. This is not epidemiologically required and there are other protocols without this weakness. Edit: The new version of their protocol addresses this issue.