Criminal neural networks. Can hackers turn artificial intelligence to evil?

Brother



Our world is changing, and new technologies bring with them not only new opportunities but also new dangers. This material is devoted to neural networks: not to what neural networks themselves are capable of (everyone writes about that anyway), but to the potential harm they can bring in the hands of hackers.

The image of a hacker in modern media is that of a magician or miracle worker, whose spells are scripts, viruses and the like. A lone hacker, practically from a phone, can break into the most advanced security system and get through any firewall. Criminal hackers in this context are portrayed as evil wizards using modern tools for evil. One such tool is, of course, artificial intelligence.

What is AI?​

Artificial intelligence is an umbrella term covering quite a few different tools related to neural networks, big data, and machine learning. Most often it refers to a neural network trained to solve some, usually interpretive, task: speech recognition, image recognition, translating text from one language to another with context, and so on.

An ordinary neural network is, to a first approximation, a complex mathematical function f(x, a) that depends on a set of arguments x and a certain set of parameters a. It receives data in a specific format as input and outputs a result.
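As a rough sketch (not taken from the article), this view of a network as a function f(x, a) can be written in a few lines of Python with NumPy. All shapes, values, and the one-hidden-layer architecture below are arbitrary choices made only to produce a runnable illustration.

```python
import numpy as np

# A tiny illustration of "a neural network is a function f(x, a)":
# one hidden layer with a tanh nonlinearity. The dimensions here are
# arbitrary and chosen only to make the example runnable.

rng = np.random.default_rng(0)

# Parameters "a": two weight matrices and two bias vectors.
a = {
    "W1": rng.normal(size=(3, 4)),  # input dim 3 -> hidden dim 4
    "b1": np.zeros(4),
    "W2": rng.normal(size=(4, 2)),  # hidden dim 4 -> output dim 2
    "b2": np.zeros(2),
}

def f(x, a):
    """Forward pass: the 'complex mathematical function' of the text."""
    h = np.tanh(x @ a["W1"] + a["b1"])  # hidden representation
    return h @ a["W2"] + a["b2"]        # raw output scores

x = np.array([1.0, -0.5, 2.0])  # an input of the expected format
print(f(x, a).shape)  # -> (2,)
```

The set of parameters a is what training later adjusts; the arguments x are whatever data the network is fed at run time.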

The creation of the simplest neural network looks like this. First, the task is defined, then the type of neural network is chosen. Next, an array of labeled data is collected. The network is trained on this data: values of the parameters a are selected automatically so that the function produces the required values on the labeled sample.

Finally, the network is tested, after which it is, in principle, ready to work, provided nothing broke at any of the previous stages: the sample was good, the network architecture was guessed correctly, and so on.
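The train-then-test workflow described above can be sketched in a few lines. Everything here is invented for illustration: the "network" is just a linear model, the labeled data are synthetic, and gradient descent stands in for whatever optimizer a real project would use.

```python
import numpy as np

# A minimal sketch of training and testing: parameters a are adjusted
# automatically so that f(x, a) matches a labeled sample, then checked
# on held-out data. The model and data are synthetic toys.

rng = np.random.default_rng(1)

# Labeled sample: inputs X with targets y = 2*x0 - x1 plus a little noise.
X = rng.normal(size=(120, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.01 * rng.normal(size=120)

# Split into a training set and a test set.
X_train, y_train = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]

a = np.zeros(2)  # the parameters to be "selected in automatic mode"

for step in range(500):                                     # training loop
    pred = X_train @ a                                      # f(x, a)
    grad = 2 * X_train.T @ (pred - y_train) / len(y_train)  # MSE gradient
    a -= 0.1 * grad                                         # descent update

test_error = np.mean((X_test @ a - y_test) ** 2)
print(np.round(a, 2), test_error)  # a ends up close to the true [2, -1]
```

The final evaluation on held-out data corresponds to the "testing" stage: low test error suggests the selected parameters generalize beyond the labeled sample.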

An important nuance: a neural network is a highly specialized tool, honed to solve one specific problem. Attempts to retrain it or broaden its scope lead to it not only failing to learn the new task, but also "losing" the skill of solving the old one (so-called catastrophic forgetting).

Narrow specialization appears to be a mathematical property of the approach, one that does not depend on the physical implementation of the tool. This means that if you need to solve a problem that is close to, but different from, the one an existing neural network solves, you will have to develop a new network from scratch.

Finally, a neural network is not programmed in the usual sense of the word. This means that checking the theoretical correctness of its results (the way one checks an algorithm in a program) requires creating new methods and tools. In other words, operational safety, meaning the absence of failures, is an unsolved problem for which the right tools have yet to be created.

What do modern hackers do?​

Modern hackers have little in common with the image of the lone professional portrayed in television series. Hackers who engage in illegal activity are mostly part of organized criminal groups, which often include large numbers of relatively low-level programmers.

Their main goal, like any organized criminal group, is to make a profit with minimal costs and risks. This determines the main areas of work of cybercriminals: the lion's share of the crimes they commit is tied to theft of users' personal data, fraud and extortion.

Accordingly, there are five main types of cyber threats.

More than half of all cyber threats are malware infections of computers. Besides the usual viruses on websites, hackers have long been using more inventive techniques, for example embedding malicious code into open-source projects or compromising the servers that deliver automatic software updates.

In the latter case, tens (and possibly hundreds) of thousands of computers around the world have been infected through a single update system.

Another type is individual fraudulent attacks. They imply direct contact with the user in order to convince a person to give data or voluntarily install an application containing malicious code. Such attacks often involve real people who can pretend to be a support service, bank employees, and so on.

Such attacks most often involve outright theft of money. Sometimes an attack ends with infection by a virus that encrypts data and demands a money transfer to unlock it (a virtual form of extortion).

The third type is classic hacking of systems using programs, scripts, and so on. This type of crime accounts for just under 17 percent of all cyber threats.

This is often part of a global scheme - for example, malware infection of servers. One of the main targets of hacker attacks (in addition to traditional government institutions) is crypto exchanges and other platforms related to cryptocurrency.

Attacks on websites are still popular. Basically, we are talking about online stores and other sites that accumulate personal information of their users. This data is either simply stolen, or an extortion scheme is used.

Finally, the last type is the far-from-obsolete DDoS attack. In July 2019, DDoS attacks turned exactly 20 years old. The main idea is to build a network of bots: computers ready, on command, to bombard a given server with requests and overload it.



Do hackers need neural networks?​

In the media, stories are quite common that indicate the potential danger of neural networks in the hands of hackers.

It is believed that neural networks will make existing threats more dangerous:

1) Creation of neural networks that can simulate communication with a person, for example by e-mail. The idea is that the network needs to keep the conversation going long enough for the recipient to believe he is communicating with someone he trusts, at which point the user transfers money to the "interlocutor" or discloses private data.

2) HDDoS, a "human" version of the DDoS attack. Nowadays many systems can recognize DDoS attacks and counter them quite effectively. The idea is that generating requests not with a plain botnet but with neural-network bots would create the illusion that the site is experiencing an influx of live users.

3) Overcoming antivirus protection systems. It is believed that neural networks will help infect computers with malware.

All three scenarios are highly unlikely and frankly fantastic. For example, if neural networks of the first type are created, they will primarily be useful for automated marketing systems, that is, they will end up in the hands of marketers. Further, it is more profitable to use neural networks that imitate the behavior of live users to simulate heavy traffic on sites than to crash them. Finally, neural networks that can bypass antivirus protection belong to a fairly distant future, and it is not clear that they are feasible at all.

There are, however, more exotic scenarios.

In 2018, hospitals and clinics were targeted several times by hacker attacks. In one case, hospital management had to pay hackers a ransom in bitcoins to regain access to patient data.

The fact is that in terms of IT (including in terms of security), medical institutions lag far behind other institutions dealing with personal data. Often, medical institutions use software products that, according to experts, are vulnerable to all sorts of hacker attacks.

Fantasy and reality​

Several Israeli researchers decided to find out what maximum damage hackers armed with a neural network could inflict on a medical facility. To do this, they devised the following scenario: through vulnerable software, hackers gain access to patients' personal data, in particular to X-ray images.

Suppose hackers are using a neural network trained for a very special task: "removing" cancerous tumors from X-rays. Such interference with personal data, the researchers say, can be fatal for patients.

There are quite a few similar articles, but, like this particular one, they fail to take many factors into account.

First, the neural network mentioned in this study can only be created with the participation of a large number of specialists, and it must be trained on the data obtained for specific medical research. One should not expect that the average hacker group will have access to such a training set.

Second, the described scenario involves a very expensive and very narrow attack on the data. Notably, the article talks about altering images, but not other records, such as patients' test results. This means the authors themselves greatly understate the complexity of their invented attack, which aims to interfere with medical data in order to influence a diagnosis.

It is not surprising, therefore, that most experts agree that existing neural networks are not suitable for current hacking tasks due to the high complexity and cost of training. If new threats associated with neural networks do arise, it will be exclusively along with new ways of using them.

So hackers don't need neural networks?

Malicious software can cause significant damage to industries. For example, WannaCry, a virus that was not originally intended to attack factory systems, caused production to halt at several Renault plants and also entered Nissan's production systems.

In other words, there are two parts to the correct answer.

Firstly, ordinary hackers do not need neural networks, but they can find their use in industrial espionage and attacks at the state level, or even have already found them.

Secondly, progress does not stand still. If working with neural networks is long and expensive now, that does not mean it always will be. The emergence of standardized libraries and publicly available labeled data sets could radically change the situation. Then crimes involving neural networks could enter the life of the ordinary user. This means we need to think about these threats now.
 

Tomcat


Artificial intelligence taught to identify a person under the influence​




A group of neuroscientists, artificial intelligence specialists, and psychologists at IBM have developed a new method for determining if someone is intoxicated with MDMA by simply analyzing a person's speech patterns.
In a study published in the journal Neuropsychopharmacology, researchers were able to determine with almost 90% accuracy whether someone was intoxicated with MDMA, based on the types of words and emotions expressed in short segments of speech. The method can even distinguish MDMA use from the effects of ordinary oxytocin, for example when a person is in love; MDMA produces similar, but still significantly different, effects.
31 subjects were studied: 12 women and 19 men. Each performed two separate speech tasks four times, for a total of eight tests per subject. All subjects received a placebo, two different doses of MDMA (0.75 mg/kg and 1.5 mg/kg), and oxytocin (20 IU), so that researchers could identify differences in speech patterns among subjects.
The experimental procedure was performed in a double-blind and randomized fashion, which means that neither the subjects nor the researchers knew what the subjects were getting on each test.
The first speech task required subjects to describe, within 5 minutes, someone who was very important to them. In the second task, subjects were asked to speak freely about anything, as much or as little as they wished, while alone for 5 minutes. All speech was recorded for machine-learning analysis. Subjects taking MDMA exhibited markedly different speech patterns compared to those taking oxytocin or placebo, and the differences were more pronounced at higher doses of MDMA.
Basically, people on ecstasy used more words related to intimacy, understanding and emotion. In addition, their speech showed more cases of nervousness, as well as different vowel pronunciations and increased use of adjectives and nouns.
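The study's actual pipeline is not reproduced here, but the general idea, classifying transcripts by the kinds of words they contain, can be sketched with off-the-shelf tools. Everything below is an assumption for illustration: the toy "transcripts", the labels, and the choice of a bag-of-words model with logistic regression from scikit-learn.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sketch only: word-count features feeding a linear classifier.
# All "transcripts" below are invented, not data from the study.
transcripts = [
    "i feel so close to everyone tonight truly connected",
    "she means everything to me i understand her deeply",
    "the weather was fine and i drove to work as usual",
    "i made dinner and watched the news then went to bed",
]
labels = [1, 1, 0, 0]  # 1 = intimacy/emotion-heavy speech, 0 = neutral

# Bag-of-words counts -> logistic regression over those counts.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(transcripts, labels)

print(model.predict(["i feel deeply connected to her"]))  # -> [1]
```

A real system would need far more data and richer features (emotion lexicons, prosody, vowel acoustics), but the skeleton, turning speech into word-based features and training a classifier on labeled sessions, is the same.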
Thus, someday in the foreseeable future, doctors may be able to determine whether you are taking MDMA, or possibly other drugs such as marijuana, alcohol, meth, cocaine or heroin, simply by recording your conversation and checking it on a special device.



If doctors can do this, so can the police, which threatens our freedom. Australian authorities are currently testing infrared cameras to catch people on MDMA at festivals based on body heat levels alone: no breathalyzers, blood sampling, or urine or saliva samples required.
 