A debate inspired by an article by BBC reporter Dave Lee, in which he asks how journalists should report on cyber attacks. The points below highlight the main themes analyzed.
"Cyberwar will be the defining story of our generation, but right now we’re dangerously unequipped to report on it accurately."
Lee speaks of cyberwar, and the analogy that comes to mind is "war." The abstract nature of cyberwar makes it very difficult to understand. Today, who is at war with whom? How can the target of a cyber attack identify the attacker? If I am attacked, am I really the intended target, or a decoy, or was I selected by criteria unknown to me?
The first distinction to explore is between a targeted attack and a mass attack. A computer security consultant is often called in to handle incidents for worried customers who discover that one of their office computers or servers has been compromised.
According to them, the attack was targeted. Did it come from a competitor? From the government? The hacker friend of a lover? (I am not making this up; this is what people think when they are attacked.) In reality, the vast majority of attacks are not targeted: everybody is attacked. Who judges this, and by what parameters? That is the first question the expert has to answer. Without understanding the type of attack and knowing the sensitivity of the data on the machine, the information needed to frame the case is missing. A first triage along these lines is sketched below.
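As a concrete illustration, here is a minimal triage sketch. It assumes SSH authentication logs in the usual sshd syslog format; the generic-username list, the `triage` function, and its heuristics are hypothetical, chosen only to show the reasoning: opportunistic mass scans tend to try generic usernames from many unrelated addresses, while targeted attacks probe the names of real users.

```python
import re
from collections import Counter

# Hypothetical heuristic: opportunistic scans try generic usernames from
# many unrelated addresses, while targeted attacks probe real account names.
GENERIC = {"root", "admin", "test", "guest", "oracle", "ubuntu", "pi"}
PATTERN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def triage(auth_log_path, known_users):
    attempts = []  # (username, source_ip) pairs from failed logins
    with open(auth_log_path) as f:
        for line in f:
            m = PATTERN.search(line)
            if m:
                attempts.append((m.group(1), m.group(2)))
    if not attempts:
        return
    users = Counter(u for u, _ in attempts)
    ips = {ip for _, ip in attempts}
    generic_share = sum(n for u, n in users.items() if u in GENERIC) / len(attempts)
    real_hits = sum(n for u, n in users.items() if u in known_users)
    print(f"{len(attempts)} failed logins from {len(ips)} distinct IPs")
    print(f"generic usernames: {generic_share:.0%}, attempts on real accounts: {real_hits}")
    # Many IPs trying mostly generic names points to an indiscriminate scan;
    # few IPs probing real account names deserves a deeper, "targeted" analysis.
```

On a typical internet-facing server, calling `triage` with the real user list usually reveals thousands of failed logins with generic names from all over the world: exactly the "everybody is attacked" background noise that customers mistake for a personal vendetta.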
"In years gone by, when reporting on war, journalists have sometimes needed to be inventive in getting the news out."
In a physical context such as war, it is easy to form hypotheses and draw conclusions: someone invades, someone resists, targets are conquered. We learned this in elementary school history lessons and deepened it over the years; the same has not happened with the digital equivalent. Using the wrong term, for example confusing a distributed denial of service with a denial of service based on a few data packets, distorts the entire understanding of the motivations and resources at stake. Since journalists cannot (yet) be expected to grasp this level of detail or to communicate it to readers, improvisation produces misrepresentation of a technological environment that itself barely understands what is going on: 99% of technicians are ignorant about security, and they have no incentive to understand it, even though responsibility for those weaknesses lies with their profession. The sketch below makes the DDoS/DoS distinction concrete.
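To show why the terminology matters, here is a minimal classification sketch. It assumes a simple flow log of `timestamp source_ip bytes` lines (a made-up format), and the cut-offs are arbitrary; the point is that a volumetric DDoS implies many coordinated sources and large resources, while a low-volume DoS can come from a single machine exploiting an expensive code path. Those are two very different stories about the attacker.

```python
from collections import Counter

def classify_flood(flow_lines):
    """Rough heuristic over 'timestamp source_ip bytes' records (a made-up
    format): many sources plus high volume suggests a distributed flood;
    few sources and little traffic suggests an application-level DoS."""
    sources = Counter()
    total_bytes = 0
    for line in flow_lines:
        _ts, src, nbytes = line.split()
        sources[src] += 1
        total_bytes += int(nbytes)
    if len(sources) > 1000:  # arbitrary cut-off for "distributed"
        return "volumetric DDoS: coordinated traffic from many sources"
    if total_bytes < 10_000_000:  # arbitrary: ~10 MB is not a bandwidth flood
        return "low-volume DoS: few packets, likely hitting an expensive code path"
    return "unclear: needs packet-level inspection"
```

A reporter who conflates the two cases attributes botnet-scale resources to what may be one person with a laptop, which is precisely the misunderstanding of motivations and resources described above.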
"There is a duty not to compromise the operations of the military, particularly when lives are at risk."
Dave Lee makes another point that is not yet clear, either to reporters or to netizens. The level of interconnection between lives and services means that something as seemingly insignificant as the theft of credentials from an online dating website may lead to attacks on other organizations. A compromised online dating forum can be used for password-reuse attacks or spear phishing against sensitive targets. The person you are on Facebook and OkCupid is the same person who, the next day, works on business data through the same device. The user may perceive this distinction and may even be trained to take special precautions with business data, but the attack strikes precisely when those psychological barriers are lowered. From there, the attack can climb up to sensitive data. At that point, is it fair to explain the correlation so as to increase public awareness, or is it better to keep the news private, or to avoid naming the actors? And who can say that the attack ended there, and does not serve a future escalation? A defensive check against password reuse is sketched below.
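As an illustration of the password-reuse risk, here is a minimal defensive sketch a company might run after a third-party breach becomes public. Everything here is hypothetical: the data structures, the `find_reused` name, and especially the salted SHA-256 scheme, used only to keep the example self-contained (real password stores should use bcrypt, scrypt, or Argon2).

```python
import hashlib

def find_reused(leaked_creds, corporate_hashes, salt_lookup):
    """leaked_creds: {email: plaintext password found in the dump}
    corporate_hashes: {email: hex digest of salt + corporate password}
    salt_lookup: {email: per-user salt}. All names are illustrative, and
    salted SHA-256 stands in for a proper scheme such as bcrypt or Argon2."""
    at_risk = []
    for email, leaked_pw in leaked_creds.items():
        if email not in corporate_hashes:
            continue
        digest = hashlib.sha256((salt_lookup[email] + leaked_pw).encode()).hexdigest()
        if digest == corporate_hashes[email]:
            at_risk.append(email)  # same password on both services
    return at_risk  # candidates for an immediate forced reset
```

The accounts flagged this way are exactly the bridge from an "insignificant" dating-site leak to a corporate compromise: a forced reset closes it before an attacker can cross.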
"But as history shows, governments on both sides of a conflict mostly do whatever they can, to share as little as possible. And what they do share is often propaganda."
Propaganda is a great risk. Digital information can be fabricated on purpose to deceive the analyst or, at least, to muddy the investigation. Whoever performs the analysis can shape public perception, a risk that only materializes if the institution knows what actually happened and can afford to alter that perception. Disseminating propaganda while the event itself is still unclear is an embarrassing mistake. Consider data leaks made to damage the image of a company or an institution: how were they carried out? By an insider? Through a second-hand computer whose old data was never wiped? By an attacker with a plan, or by someone who had a stroke of luck? Lying while the attacker can still release revelations that invalidate your statement is very risky. For this reason everything is played down, to reassure, as if the problem had already been solved...
"There are no sounds of gunshots to make it obvious something is happening."
"There are no soldiers on the ground that can be observed and asked: “Who do you work for?”
What a reporter can recount also depends on the transparency with which the victim discloses details about the attack. In some cases these details implicitly describe vulnerabilities, so they cannot be disclosed until the weakness is fixed, which can take hours or weeks. Another option, regardless of the type of attack, is for the victim to assess the stolen data and the worst-case consequences. We consider the worst case because ignoring the loss of data is worse than managing the incident. Customers and partners can adapt to the risk, keeping in mind that interactions with other parties can always be tampered with. For this reason, having a recovery procedure and applying it when needed does no harm, and could become a regular practice. We need to explain that responsible disclosure is not a loss of image but a gesture of civility towards a networked society based on trust.
Publishing aggregate statistical reports can be a solution, but before the question of responsible disclosure comes a serious problem of data analysis, access control, and anomaly detection. A minimal example of the latter follows.
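To give a flavor of what "anomaly detection" means in practice, here is a minimal sketch over a daily event count, say failed logins per day. The function name, window size, and threshold are illustrative assumptions; real monitoring would also account for seasonality and trends.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=14, threshold=3.0):
    """Flag days whose event count deviates more than `threshold` standard
    deviations from the trailing `window`-day baseline. Parameters are
    illustrative; real monitoring would handle seasonality and trends."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_counts[i] - mu) > threshold * sigma:
            anomalies.append((i, daily_counts[i]))
    return anomalies
```

Fed at least two weeks of daily counts, the function returns the days whose volume departs sharply from the recent baseline: raw material both for incident response and, once aggregated, for the statistical reports mentioned above.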