Using machine learning to reduce risk

Enterprises are now deploying machine learning. The three main reasons businesses want their employees to use it are cost savings, faster processing of massive amounts of data, and faster discovery of new vulnerabilities.

Large retailers use machine learning applications to spot fraudulent e-commerce transactions while avoiding blocking legitimate ones. They also use machine learning to analyze customer attitudes toward products and to identify attackers posing as long-term customers.

Financial institutions use machine learning applications to predict loan defaults as well as fraud and money laundering. Hospitals can use machine learning to predict avoidable emergency room wait times, impending strokes and seizures, and preventable readmissions. Large law firms can use machine learning to help lawyers decide more quickly which cases to take on, and legal bots can be trained to check whether corporate contracts contain all the necessary clauses.

Machine learning comes with risks

Even the best machine learning models carry risks. Poorly designed learning algorithms can produce false positives that attackers exploit, and models may also ingest infected data from recently compromised hosts. The absence of false positives does not mean the absence of risk: attackers can exploit vulnerabilities in the systems and platforms that run machine learning applications. Another risk is that attackers can trick a machine learning model during testing or execution into classifying malicious training samples as legitimate, causing the model to produce completely different results than expected.

Machine Learning Risk Management

Here are five ways to reduce the risk of machine learning applications:

1. Execute ethical attacks

An ethical attack is one in which a trusted security expert hacks into a system to uncover machine learning vulnerabilities that firewalls, intrusion detection systems, and other security tools overlook. To gain access, an ethical attacker might use a fake fingerprint reconstructed from the prints a legitimate user left on a device. Once inside the system, the ethical attacker could infiltrate the fingerprint database, obtain another legitimate user's biometric template, and reconstruct a further fake fingerprint. To counter this risk, device readers must be cleaned after each use, and biometric databases should be encrypted.

2. Encrypt security logs

System administrators have superuser privileges to analyze machine learning log files for several reasons, including checking compliance with security policies, troubleshooting systems, and forensics. Encrypting log files is one way to protect them from attack: the encryption keys required to change log contents are never exposed to attackers, and administrators are alerted immediately if an attacker attempts to delete the log files.
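As an illustrative sketch (not from the original article), tamper evidence for a log can be built with an HMAC chain, where each entry's authentication code also covers the previous entry's code. The signing key and log messages below are hypothetical:

```python
import hmac
import hashlib

# Hypothetical signing key; in practice it would be kept off the logging host.
KEY = b"log-signing-key-kept-elsewhere"

def append_entry(log, message):
    """Append (message, mac), chaining the MAC to the previous entry's MAC
    so that editing or deleting any line breaks verification."""
    prev_mac = log[-1][1] if log else b""
    mac = hmac.new(KEY, prev_mac + message.encode(), hashlib.sha256).hexdigest().encode()
    log.append((message, mac))

def verify(log):
    """Recompute the whole chain; any mismatch means the log was tampered with."""
    prev_mac = b""
    for message, mac in log:
        expected = hmac.new(KEY, prev_mac + message.encode(), hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac = mac
    return True

log = []
append_entry(log, "model=fraud-detector loaded")
append_entry(log, "admin login from 10.0.0.5")
print(verify(log))                                    # True
log[1] = ("admin login from 10.0.0.99", log[1][1])    # attacker edits an entry
print(verify(log))                                    # False
```

This only covers integrity; the log contents themselves would additionally be encrypted with an authenticated cipher, with the keys managed outside the monitored host.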
3. Clean training data

Machine learning models work well only when they are trained on good data. The model developer must know where the data comes from, and it must be clean data, not anomalous or infected data. If the host supplying the data is compromised, stop using that data. Bad data can cause machine learning models to perform poorly and can eventually bring the system down. When using machine learning tools to evaluate data for a specific purpose, model developers should also convert all the data into a common format.
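As a minimal sketch of this cleaning step, assuming numeric readings (the sample values and the `k` threshold below are invented for illustration), data can be coerced into a common format and screened with a robust median-based outlier rule:

```python
from statistics import median

def clean(samples, k=10.0):
    """Coerce samples to a common numeric format, then drop values that sit
    far from the median (robust even when outliers skew the mean)."""
    values = [float(v) for v in samples]                  # common format: float
    med = median(values)
    mad = median(abs(v - med) for v in values)            # median absolute deviation
    return [v for v in values if abs(v - med) <= k * mad]

raw = ["10.1", 9.8, "10.3", 250.0, 9.9]   # 250.0 looks anomalous or infected
print(clean(raw))                          # [10.1, 9.8, 10.3, 9.9]
```

A median-based rule is used rather than a mean/standard-deviation cutoff because a single large outlier can inflate the standard deviation enough to hide itself.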
4. Adopt DevOps for the model life cycle

Attackers can exploit false positives produced by machine learning platforms. To address this risk, we can apply DevOps to the machine learning model life cycle, which allows the development and training, quality assurance, and production teams to collaborate.

DevOps starts with the development and training phase, then moves to the quality assurance phase to see how well the model was trained. Unsatisfactory test results mean going back to the development stage and providing the model with better data. If the test results are good, the model goes into production and processes real-world data. If production results are not as expected, the cycle repeats from the development or quality assurance phase.
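The loop described above can be sketched as follows; `train` and `qa_gate` are toy stand-ins, and the accuracy formula and 0.9 threshold are invented for illustration:

```python
def train(data):
    """Toy stand-in for a real training run: accuracy grows with data size."""
    return min(0.99, 0.6 + 0.05 * len(data))

def qa_gate(accuracy, threshold=0.9):
    """Quality assurance phase: accept the model only above the threshold."""
    return accuracy >= threshold

def lifecycle(data, max_rounds=10):
    """Development/training -> QA -> production loop: a failed QA round
    sends the model back to development with better (here: more) data."""
    for round_no in range(1, max_rounds + 1):
        accuracy = train(data)
        if qa_gate(accuracy):
            return f"deployed after round {round_no} (accuracy={accuracy:.2f})"
        data = data + ["improved-sample"]   # back to development with better data
    return "never passed QA; revisit development"

print(lifecycle(["sample-1", "sample-2"]))   # deployed after round 5 (accuracy=0.90)
```

In a real pipeline the same gate sits in front of production: if live results drift from expectations, the model re-enters the cycle at development or QA.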
5. Deploy a security policy

Finally, we should also deploy a security policy. In the simplest case, a security policy consists of five parts: purpose, scope, context, actions, and limitations. The scope section determines what is covered: machine learning model types, training data, and data mining algorithms (regression, clustering, or neural networks). The context section explains the reasons behind the policy, the actions section describes how DevOps can be leveraged to reduce risk, and the limitations section addresses the limits of machine learning and the availability of test data.

(Author: Judith M. Myerson; Translator: Zou Zheng; Source: TechTarget China)
