Artificial intelligence is a fantastic tool for fields such as healthcare, technology or astrophysics. But in the wrong hands, it can also be used for crime and deception. And the worst threats are not always where you would expect.
Hacked self-driving cars, military hardware turned to offensive ends, fabricated news, manipulation of financial markets… “The expansion of the capabilities of AI-based technology is accompanied by an increase in the likelihood of criminal exploitation,” warns computer science researcher Lewis Griffin of University College London (UCL). With his colleagues, he compiled a list of potential AI-enabled crimes and ranked them by potential harm, criminal gain, ease of implementation, and difficulty of detection and prevention.
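The ranking procedure described above can be sketched in a few lines of code. This is a purely illustrative reconstruction: the threat names, scores and the unweighted-sum scoring rule are assumptions for the example, not the researchers' actual data or method.

```python
# Hypothetical sketch of ranking AI-enabled crimes on the four
# dimensions mentioned in the article. All scores are invented
# for illustration; the study itself relied on expert ratings.

from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    harm: int           # potential loss to victims or society (1-5)
    profit: int         # potential criminal gain (1-5)
    achievability: int  # ease of implementation (1-5)
    defeatability: int  # difficulty of detecting and stopping (1-5)

    @property
    def severity(self) -> int:
        # Simple unweighted sum, purely as an example of combining
        # the four criteria into a single ranking score.
        return self.harm + self.profit + self.achievability + self.defeatability

threats = [
    Threat("Fake audio/video (deepfakes)", 5, 4, 4, 5),
    Threat("Burglar robots", 2, 2, 2, 2),
    Threat("AI-written fake reviews", 2, 2, 4, 3),
]

for t in sorted(threats, key=lambda t: t.severity, reverse=True):
    print(f"{t.severity:2d}  {t.name}")
```

Under this toy scoring, deepfakes come out on top and burglar robots at the bottom, which mirrors the article's point that the scariest-sounding crimes are not the highest-ranked ones.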
The most frightening crimes, such as burglar “robots” breaking into your apartment, are not necessarily the most dangerous: they can easily be thwarted and affect only a few people at a time. By contrast, a fake video used to smear or blackmail a well-known person is hard to counter, and such “deepfakes” can cause considerable economic and social damage.
Artificial Intelligence: Serious Threats
- Fake video: Impersonating someone by making them appear to say or do things they never said or did, in order to gain access to secure data, manipulate public opinion or damage a person’s reputation… these fake videos are almost impossible to detect.
- Self-driving car hacking: Taking control of an autonomous vehicle to use it as a weapon (for example to carry out a terrorist attack or cause a crash, etc.).
- Tailored phishing: Generating personalized, automated messages to make phishing more effective, with the aim of collecting secure information or installing malware.
- Hacking AI-controlled systems: For example, causing large-scale traffic jams or disrupting the food supply chain.
- Large-scale blackmail: Collecting personal data in order to send automated threat messages. AI can also be used to fabricate false evidence (as in “sextortion”).
- AI-written fake news: Producing false information that appears to come from a reliable source. AI can also generate many versions of a given piece of content to boost its visibility and credibility.
Artificial Intelligence: Medium-Intensity Threats
- Military robots: Taking control of robots or weapons for criminal purposes. A potentially extremely dangerous threat, but difficult to carry out, as military equipment is usually heavily protected.
- Scams: Selling fraudulent services under the guise of AI. There are many notorious historical examples of con artists successfully selling costly fake technology to large organizations, including national governments and the military.
- Data poisoning: Deliberately introducing false data to alter a learning system or induce specific biases. For example, making a detector blind to weapons, or steering an algorithm to invest in a particular market.
- Learning-based cyberattacks: Carrying out attacks that are both targeted and massive, for example using AI to probe systems for vulnerabilities before launching many simultaneous attacks.
- Autonomous attack drones: Hijacking autonomous drones or using them to attack a target. These drones could be particularly threatening when acting en masse in self-organized swarms.
- Denial of access: Blocking or depriving users of access to financial services, employment, public services or social activity. Not profitable in itself, this tactic can be used as a means of blackmail.
- Facial recognition: Defeating facial-recognition systems, for example by creating fake identity photos (smartphone access, surveillance cameras, passenger screening, etc.).
- Financial market manipulation: Corrupting trading algorithms to harm competitors, artificially driving a price up or down, triggering a financial crash…
Artificial Intelligence: Low-Intensity Threats
- Exploiting algorithmic bias: Taking advantage of the existing biases of algorithms, for example gaming a channel’s view counts or Google rankings to boost a product’s profile or discredit competitors.
- Burglar robots: Using small autonomous robots that slip through letterboxes or windows to retrieve keys or open doors. The damage is potentially low, since it is very localized and small-scale.
- Evading AI detection: Thwarting AI-based sorting and data collection in order to erase evidence or conceal criminal content (such as illegal pornography).
- AI-written fake reviews: For example, posting fake reviews on sites such as Tripadvisor to harm or promote a product.
- AI-assisted tracking : Use a learning system to track a person’s location and activity.
- Forgery: Creating fake content, such as paintings or music, that can be sold under false authorship. The potential for harm remains fairly low, since well-known paintings and pieces of music are few in number.