Tools that make it easier to ascertain the truthfulness or falsity of claims made online, whether by traditional media or IWIO perpetrators, are unlikely to affect the thought processes of hard-core partisans, but they may facilitate more rational thought among individuals who have not yet become impervious to reason and fact. Along these lines, researchers at Indiana University have provided an example of tools to support computational fact-checking that help humans rapidly assess the veracity of dubious claims.
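One way to frame computational fact-checking is as a computation over a knowledge graph: a claimed relationship between two entities is more plausible when the entities sit close together in a graph of established facts. The following is a minimal, illustrative sketch of that general idea, not the Indiana University system itself; the graph, entities, and scoring function are toy assumptions made up for illustration.

```python
from collections import deque

# Toy knowledge graph: entity -> set of directly connected entities.
# A real system would extract millions of such links from curated
# sources; these few illustrative links stand in for that corpus.
GRAPH = {
    "Barack Obama": {"United States", "Hawaii"},
    "Hawaii": {"United States", "Barack Obama"},
    "United States": {"Barack Obama", "Hawaii", "Washington, D.C."},
    "Washington, D.C.": {"United States"},
    "Kenya": {"Nairobi"},
    "Nairobi": {"Kenya"},
}

def path_length(start, end):
    """BFS shortest-path length between two entities; None if unconnected."""
    if start == end:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nbr in GRAPH.get(node, ()):
            if nbr == end:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, dist + 1))
    return None

def claim_plausibility(subject, obj):
    """Crude score in [0, 1]: shorter paths suggest better-supported claims."""
    d = path_length(subject, obj)
    return 0.0 if d is None else 1.0 / (1 + d)
```

A directly linked pair such as ("Barack Obama", "Hawaii") scores 0.5, while entities with no connecting path score 0.0. The design choice worth noting is that the tool does not "know" truth; it only measures consistency with an existing body of accepted facts, which is why such tools assist human fact-checkers rather than replace them.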
Slovic has suggested that one way to combat uses of the affect heuristic that exploit a positive feeling about a situation is to create a negative feeling about the same situation. In a personal conversation with me, Slovic speculated that individuals forced to reconcile two contradictory feelings might thereby shift to a more deliberative mode of thought; this question is, of course, researchable. It is also possible to take biases into account when formulating a response strategy. For example, the repetition of false statements often increases belief in those falsehoods, even when such repetition occurs in the context of refuting the statements; a refutation should therefore lead with the correct facts rather than restate the falsehood prominently.
A second example might involve responding to commentators pointing to a doubling of the likelihood that a bad event will occur in a given time frame (that is, a probability p becoming 2p). But if p is small in magnitude, the likelihood of the bad event not happening, 1-p, is not very different from 1-2p, the corresponding likelihood after the doubling.
Developing Responses to Cyber-Enabled Information Warfare and Influence Operations
Emphasizing the latter point may prove more effective in political discourse than arguing about the consequences of the former. A claim that the risk of failure is doubled when the original likelihood of failure is 1 percent simply means that the chances of success drop from 99 percent to 98 percent and thus remain very good. Again, the value of this type of reframing is researchable. A third approach calls for educational efforts to improve the ability of a populace to think critically about its consumption of media.
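The arithmetic behind this reframing is easy to make concrete. The snippet below contrasts the alarming "risk has doubled" framing with the "chance of success" framing, using the hypothetical 1 percent baseline from the example above:

```python
# Two framings of the same change in risk, assuming a baseline
# failure probability p of 1 percent that an adversary claims
# has "doubled."
p = 0.01
doubled = 2 * p

relative_increase = doubled / p                  # the alarming framing: 2x
success_before = 1 - p                           # 0.99
success_after = 1 - doubled                      # 0.98
absolute_drop = success_before - success_after   # only 0.01

print(f"Risk framing: failure probability doubled ({p:.0%} -> {doubled:.0%})")
print(f"Success framing: {success_before:.0%} -> {success_after:.0%}, "
      f"a drop of {absolute_drop:.0%}")
```

Both statements describe exactly the same numbers; only the reference point changes. Whether audiences actually respond differently to the two framings is, as the text notes, an empirical question.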
In K-12 curricula, states should encourage a widespread refocusing on critical reading and analysis skills for the digital age. Introductory seminars at universities should include a crash course in sourcing and in emotional manipulation in the media. The second broad category of measures to defend against IWIO involves measures to degrade, disrupt or expose the arsenal of weapons being leveraged against a target population.
For the most part, the party responsible for taking these measures would be infrastructural entities in the information environment: social media companies, news organizations and the like. One category of such measures includes support for fact checkers.
For example, the Poynter Institute, a nonprofit entity that includes a journalism school, has established what it calls the International Fact-Checking Network and has sought to promulgate a code of principles promoting excellence in fact-checking, which in turn promotes accountability in journalism. According to the Poynter website, these principles include commitments to nonpartisanship and fairness; transparency of sources; transparency of funding and organization; transparency of methodology; and open and honest corrections. Facebook has announced that being a signatory to this code is a condition for providing fact-checking services to Facebook users.
Facebook has also introduced a button that makes it much easier for users to signal that they regard a given story as fake news. By combining such indicators with other signals, Facebook seeks to identify stories that are worth fact-checking and send such stories to fact-checking organizations. A second category consists of measures to disrupt the financial incentives for providing fake news. For example, the New York Times reported in November on a commercial operation in Tbilisi, Georgia, intended to make money by posting a mix of true and false stories that praised Donald Trump.
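One can sketch in principle how such signals might be combined to triage stories for fact-checking. The signal names, weights, and scores below are entirely hypothetical assumptions for illustration; a platform would tune or learn such a model from labeled data rather than hard-code it:

```python
# Hypothetical weighted combination of signals for deciding which
# stories to route to fact-checking organizations first.
WEIGHTS = {
    "user_flag_rate": 0.5,   # fraction of viewers who flagged the story
    "share_velocity": 0.3,   # how unusually fast the story is spreading
    "source_distrust": 0.2,  # prior distrust score for the publishing domain
}

def triage_score(signals):
    """Weighted sum of signals clamped to [0, 1]; higher = check sooner."""
    return sum(WEIGHTS[name] * min(max(signals.get(name, 0.0), 0.0), 1.0)
               for name in WEIGHTS)

stories = [
    {"id": "a", "user_flag_rate": 0.8, "share_velocity": 0.9, "source_distrust": 0.7},
    {"id": "b", "user_flag_rate": 0.1, "share_velocity": 0.2, "source_distrust": 0.1},
]
# Highest-priority stories first.
queue = sorted(stories, key=triage_score, reverse=True)
```

The point of combining signals is robustness: user flags alone are gameable (coordinated flagging of legitimate stories), so a real system would weigh them against independent indicators.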
Google has also announced plans to prevent its advertisements from appearing on fake news sites, thus depriving them of revenue.
A third category involves measures to reduce the volume of automated amplifiers (e.g., bots) of fake or misleading information. Of course, a precondition for any such measure to work is the ability to identify automated amplifiers, as discussed above. To date, such identification has largely been performed by independent researchers. It stands to reason, however, that infrastructure providers such as Twitter and Facebook would be in a better position to identify automated accounts, and there is no particular reason that a different set of enforceable rules (i.e., terms of service) could not be applied to such accounts. Research is needed to help providers distinguish more effectively between legitimate and illegitimate automated accounts and to determine how different terms of service might be applied to them.
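To make the identification problem concrete, here is a deliberately simple sketch of the kind of behavioral heuristics independent researchers have used to flag likely automated accounts. The thresholds are illustrative assumptions, not any platform's actual rules, and real detectors combine many more features (network structure, timing patterns, content similarity):

```python
def looks_automated(post_texts, account_age_days):
    """Flag an account whose posting behavior trips simple heuristics.

    post_texts: list of the account's post texts.
    account_age_days: age of the account in days.
    """
    if not post_texts or account_age_days <= 0:
        return False
    # Sustained superhuman posting rates are a classic automation signal.
    posts_per_day = len(post_texts) / account_age_days
    # So is a feed made up mostly of verbatim repeats of the same text.
    duplication = 1 - len(set(post_texts)) / len(post_texts)
    return posts_per_day > 50 or duplication > 0.8
```

A heuristic like this illustrates why the legitimate/illegitimate distinction is hard: a news outlet's feed bot and a disinformation amplifier can look identical on these features, which is exactly the research gap the text identifies.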
A fourth approach is aimed at greater transparency of political traffic carried on social media. For example, the Washington Post reported that Facebook was seeking to develop the capability to show tailored political ads to very small groups of Facebook users. With this capability, it is possible to convey entirely different messages to different groups of people, enabling political campaigns to target messages precisely calibrated to the particular hot-button and motivating issues of interest to specific groups.
Indeed, such messages could even be contradictory, and the broader population would never know if the ads were not made public. To increase transparency, Facebook has imposed a set of requirements on political advertising. Specifically, ads with political content appearing on Facebook are required to include information about who paid for them.
All such ads appearing after May 7 will also be made available for public perusal in a Facebook Ad Archive. Displayed along with each ad is information about the total amount spent, the number of ad impressions delivered, demographic information (age, location, gender) about the audience that saw the ad, and the name of the party that paid for it. An interesting question is the definition of a political ad. Ads that mention particular candidates for office are an easy call. Harder cases include issue-oriented ads that say nothing about particular candidates or political parties but are nevertheless intended to promote or detract from one side or another in an election.
Facebook has developed an initial list of topics that it regards as political. These topics include abortion, budget, civil rights, crime, economy, education, energy, environment, foreign policy, government reform, guns, health, immigration, infrastructure, military, poverty, social security, taxes, terrorism, and values, but Facebook explicitly notes that this list may evolve over time. Facebook also requires that a political advertiser pass an authorization process that verifies his or her identity and residential mailing address and also discloses who is paying for the ad s in question.
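As a toy illustration of why a topic list alone does not settle the classification question, consider a naive matcher built directly on the topics listed above. It flags any ad text mentioning one of them, and it will obviously produce false positives (an ad for "health food") and miss paraphrases that avoid the listed words; the code is a sketch, not Facebook's actual method:

```python
# Topics Facebook has listed as political (per the text above).
POLITICAL_TOPICS = {
    "abortion", "budget", "civil rights", "crime", "economy", "education",
    "energy", "environment", "foreign policy", "government reform", "guns",
    "health", "immigration", "infrastructure", "military", "poverty",
    "social security", "taxes", "terrorism", "values",
}

def flag_political(ad_text):
    """Return the set of listed topics mentioned verbatim in the ad text."""
    text = ad_text.lower()
    return {topic for topic in POLITICAL_TOPICS if topic in text}
```

The gap between this keyword pass and a reliable classifier is the reason trained models, human review, and user reporting all enter the picture.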
To deal with advertisers that should have gone through the authorization process but did not, Facebook is investing in artificial intelligence tools to examine ads, adding more people to help find rogue advertisers, and encouraging Facebook users to report unlabeled political ads. However, the database on which these tools will be trained to recognize political ads has not been made public, so it is impossible to judge their efficacy. A fifth approach, likely more relevant for future rather than current IWIO threats, focuses on forensics to detect forged emails, videos, audio and so on.
While the authenticity of the emails released in past dumps was not an issue at the time, consider the potential damage if altered or otherwise forged documents had been inserted into those dumps. Such messages might contain damaging information that would further the goals of those behind the IWIO campaign, and conflict would arise when public attention was later called to them.
This tactic would be an effective way for those behind IWIO campaigns to work faster than the news cycle. While legitimate investigators, analysts and journalists pored over the documents, the adversary would be able to point directly to the falsely incriminating emails. The hacking victim or victims would meanwhile have a difficult time persuading the mostly inattentive public that the messages were false, precisely because they were found in the context of legitimate emails. Wide distribution of forged audio or video clips would be even more difficult to refute because of widespread prejudices and confidence in the reliability of visual or audio information.
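One forensic countermeasure available to a potential victim is to commit to the contents of genuine documents in advance, for example by publishing cryptographic hashes, so that files later inserted into or altered within a dump fail verification. The sketch below illustrates the idea with hypothetical filenames and contents; in practice, email often also carries DKIM signatures, which can play a similar authenticating role for messages signed by the sending domain:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of a document's contents."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical genuine documents, hashed and published *before* any breach.
genuine = {
    "memo-001.txt": b"Quarterly planning notes.",
    "memo-002.txt": b"Travel schedule for May.",
}
manifest = {name: sha256_hex(body) for name, body in genuine.items()}

def verify_dump(dump, manifest):
    """Classify each leaked file as 'authentic', 'altered', or 'unknown'."""
    verdicts = {}
    for name, body in dump.items():
        if name not in manifest:
            verdicts[name] = "unknown"       # possibly a forged insertion
        elif sha256_hex(body) == manifest[name]:
            verdicts[name] = "authentic"
        else:
            verdicts[name] = "altered"
    return verdicts

# A later "dump" mixing a genuine file, a doctored file, and a planted one.
leaked = {
    "memo-001.txt": b"Quarterly planning notes.",
    "memo-002.txt": b"Travel schedule for May. Bribe payment confirmed.",
    "memo-003.txt": b"Entirely fabricated message.",
}
verdicts = verify_dump(leaked, manifest)
```

The obvious limitation is that this only helps if the hashes were credibly published before the leak and the public trusts them, which is why the text emphasizes that forensics address future rather than current threats.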
Visual and audio information associated with specific events once had dispositive value for authenticating events, conversations and other exchanges, but with Photoshop and audio and video editing software widely available—and constantly advancing—this assumption of certitude is simply no longer valid, and the authenticity of images and recordings will be increasingly debated rather than automatically trusted.
Official U.S. policy eschews covert information warfare and influence operations. This is not to say that the United States has never engaged in such operations, only that they are officially eschewed. The United States has traditionally engaged in open information warfare and influence operations under the rubric of public information programs, that is, the development of narratives to counter the messages delivered through adversary IWIO. Of course, the targets of such U.S. programs may well regard them as information warfare directed at themselves. Some analysts cite with approval the activities of the Active Measures Working Group, an interagency group led first by the State Department and later by the U.S. Information Agency to counter aggressive Soviet propaganda and disinformation during the Cold War.
Another type of offensive response seeks to impose costs on the perpetrators of information warfare as individuals, for example through an executive order issued in December that authorizes financial sanctions against those responsible for such activities.
If a few people are responsible for a substantial portion of election-related disinformation or influence activities, financial sanctions directed at these individuals could have a disproportionately large effect on the perpetrators while also deterring others from engaging in such activities. Other, more forceful sanctions could be levied against key perpetrators if not under the auspices of this executive order then perhaps under covert-action authority.
In all cases, of course, policymakers should be mindful of the risks of escalation. A third type of offensive response redirects the techniques of IWIO against the perpetrators themselves. For example, after a large number of Macron campaign emails were leaked at a pivotal point just before the French presidential election in May, the campaign announced that it had preemptively defended against those seeking to compromise its emails by inserting significant amounts of falsified information into email accounts it knew would be compromised.
In this instance, as the Daily Beast reported, the use of IWIO techniques against the perpetrators created uncertainty about the information they were trying to publicize and reduced the effectiveness of their manipulations.
Information warfare. Historical excursus and Russia’s position
The above sketch of possible responses to cyber-enabled information warfare and influence operations is, of course, not exhaustive. Nevertheless, it is possible to suggest that some are less likely to succeed than others. White IWIO campaigns—that is, those conducted with open acknowledgement of the source—are by design immune to naming and shaming.
Given that the Internet itself originated as a Pentagon project and that most major social networks and applications were initially developed by American companies, the U.S. is the undisputed leader in this field, including in its military dimension, such as cyber forces. But Russia is also gradually mastering cyber technologies that can be put to work by existing agencies, including military structures.
Three subdivisions have been created within the Russian Ministry of Defence to conduct research in technology, information, and communications. Another MoD structure recruits information-security specialists capable of analysing systems' resistance to hackers and of decoding telecommunications protocols.