©2017 by Consortium for Safer AI.



A Simple Risk Formula for AI


In designing products, there is a simple formula to help track and address safety issues.  The formula is


         Risk = Severity x Probability


Severity refers to the consequence of the unsafe condition under consideration.  Probability is the expected likelihood of the event occurring.  This simple formula shows that even a low-severity event can be considered high risk if the probability of its occurrence is very high.  Conversely, the formula shows that events of high severity yet extremely low (and perhaps unknown) probability of occurrence may also need to be ranked as high risk.  The latter condition, especially when the severity is an existential threat, is known as a Black Swan event.


Using this simple formula to rank all the possible unsafe conditions that a product might pose to users, product designers must then work to lower the highest-risk items below an acceptable threshold.  The reality is that risk is rarely zero in using any product.  In some cases, especially with newer technologies, the risk is higher, perhaps even higher than we expect.  Looking at the formula again, this could be due to an incomplete understanding of all the possible unsafe conditions that a new product or technology might present, affecting the Severity factor.  The other explanation is that there is insufficient data or understanding to accurately predict the probability of failure, affecting the Probability factor.
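As a minimal sketch of this ranking step, the following Python snippet scores a list of hazards with the two-term formula and flags those above an assumed threshold.  All hazard names, severity scales, probabilities, and the threshold itself are purely illustrative assumptions, not data from any real assessment.

```python
# Illustrative sketch: rank hypothetical unsafe conditions by Risk = Severity x Probability.
# Every name and number below is made up for demonstration purposes.

hazards = [
    # (name, severity on an assumed 1-10 scale, assumed probability per year of use)
    ("minor skin irritation", 2, 0.30),
    ("battery overheating",   7, 0.02),
    ("data leak",             9, 0.001),
]

ACCEPTABLE_RISK = 0.5  # an assumed design threshold

# Rank from highest to lowest risk.
ranked = sorted(hazards, key=lambda h: h[1] * h[2], reverse=True)

for name, severity, probability in ranked:
    risk = severity * probability
    flag = "MITIGATE" if risk > ACCEPTABLE_RISK else "accept"
    print(f"{name}: risk={risk:.3f} [{flag}]")
```

Note that the low-severity, high-probability hazard comes out on top here, which is exactly the point the formula makes: severity alone does not determine rank.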


All of this assumes a static state, which might be appropriate for some products.  But for most high-impact, high-penetration products, where the product strongly influences its environment, the situation is dynamic and the risk analysis may change with time.  Therefore, each factor should be treated as a function of time:


         Risk(t) = Severity(t) x Probability(t). 


In other words, risk analysis is an ongoing activity.
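The time-dependent version can be sketched the same way: each factor becomes a function that is re-evaluated as conditions change.  The specific functions below are hypothetical models chosen only to show the mechanics; they do not represent any real product's failure behavior.

```python
# Sketch of Risk(t) = Severity(t) x Probability(t): risk re-evaluated over time.
# Both functions below are illustrative assumptions.

def severity(t):
    # Assume the consequence of failure stays constant over time.
    return 6.0

def probability(t):
    # Assume failure probability drifts upward as usage patterns change
    # (a hypothetical linear model, for illustration only).
    return 0.01 * (1 + 0.5 * t)

def risk(t):
    return severity(t) * probability(t)

# Re-run the assessment each year rather than treating it as a one-time exercise.
for year in range(4):
    print(f"year {year}: risk = {risk(year):.3f}")
```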


One field that has attempted to incorporate an element of scale is disaster risk management [1].  It is well recognized that processes and systems such as climate change and global economic development are highly dynamic, displaying emergent behaviors, sensitivity to initial conditions, multiple solutions, and so on.  For this reason, practitioners in this field have made a simple change to the Risk formula by adding one more multiplier, called Exposure.


         Risk = Severity x Probability x Exposure


The actual terms in the disaster risk modeling formula are slightly different, but for our purposes this version makes the point.  Exposure captures the scale of the product's penetration and usage, as well as a measure of the unintended interactions from the environment.
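A short sketch shows how the Exposure term changes the picture.  Here the same hypothetical hazard is assessed at two stages of product penetration; treating exposure as a multiplier that scales with adoption is an assumption made for illustration.

```python
# Sketch of the three-term formula Risk = Severity x Probability x Exposure.
# All values are illustrative; exposure is assumed to scale with adoption.

def risk(severity, probability, exposure):
    return severity * probability * exposure

# The same hypothetical hazard, per-user severity and probability unchanged:
early  = risk(severity=5, probability=0.01, exposure=1.0)   # niche usage
scaled = risk(severity=5, probability=0.01, exposure=50.0)  # mass adoption

print(f"early-stage risk: {early:.2f}")
print(f"scaled-up risk:   {scaled:.2f}")
```

The per-user terms are identical in both cases; only the scale of deployment changes, yet the assessed risk grows fifty-fold.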


As an example of how this revised formula can be more insightful, consider cigarettes.  A cigarette manufacturer may identify lung cancer as a Severity factor and then assign a probability that a single user will suffer from lung cancer.  What is missing is the emerging risk that arises as the number of users (smokers) increases.  The increase in smokers, especially in indoor spaces, can lead, as we now know, to a risk of lung cancer for non-smokers in regular proximity to these smokers, an effect known as second-hand smoke.  This risk will not show up in the manufacturer's initial risk assessment, and, as the history of litigation with cigarette manufacturers has shown, it falls to external organizations and groups to understand and uncover such risks and, in turn, apply pressure to mitigate them, as has been done with laws prohibiting smoking within most public facilities.


Now you might be thinking that a cigarette is an unsafe product even under normal conditions.  But there are plenty of other examples.  Crises such as the obesity epidemic and anthropogenic climate change can be attributed to the tremendous scaling of technologies that interact with other technologies while changing their operating environment, leading to unanticipated consequences.


We suggest that in studying the risk associated with AI-enabled products, this three-term risk formula be used for a quick, simple assessment.  Though the inputs and the final output are numbers, the assessment is likely to involve many judgment calls in choosing values for each of the terms.  The important point is that the thought process should be well documented and continuously updated as new learnings emerge.  Using this simple formula will help ensure that the conversation on the risk of AI-enabled products does not focus narrowly on the risks associated with a single product and its user, but instead considers the changing risk profile as the product finds commercial success and has an impact beyond just the user.




