What are the trends in artificial intelligence technology? PwC has the answers

[Introduction]: Artificial intelligence has become one of the most advanced areas of science and technology, and technology companies are investing heavily in it. The findings of academic and industry researchers point to where AI is heading in the coming year and beyond. So what are the trends in artificial intelligence technology for 2018? PwC's analysis gives you the answers.

1. Deep Learning Theory: Uncovering the Working Principle of Neural Networks

What it is: Deep neural networks, which mimic the human brain, have demonstrated their ability to "learn" from image, audio, and text data. Yet even after more than a decade of use, there is still a great deal we don't understand about deep learning, including how neural networks learn or why they perform so well. That may be changing, thanks to a new theory that applies the information bottleneck principle to deep learning. In essence, it suggests that after an initial fitting phase, a deep neural network "forgets" and compresses noisy data (that is, data sets containing a great deal of additional meaningless information) while still preserving information about what the data represents.

Why it matters: Accurately understanding how deep learning works enables its further development and broader use. For example, it can yield insights into optimal network design and architecture choices, while providing greater transparency for security-critical or regulated applications. Expect more results from exploring how this theory applies to other kinds of deep neural networks and to deep neural network design.
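The information bottleneck theory is stated in terms of mutual information: how many bits a layer's representation retains about the input versus the labels. As a minimal, hypothetical illustration (not the theory's full machinery), here is a histogram-style mutual information estimate for discrete samples, showing the difference between a representation that keeps everything and one that "compresses" everything away:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        p_indep = (px[x] / n) * (py[y] / n)
        mi += p_joint * math.log2(p_joint / p_indep)
    return mi

# X is a fair binary signal, so a perfect representation carries
# H(X) = 1 bit; a representation that discards X carries 0 bits.
xs = [0, 1, 0, 1] * 100
ys = xs[:]                      # perfectly informative "representation"
noise = [0] * len(xs)           # representation that forgot everything
print(mutual_information(xs, ys))     # → 1.0
print(mutual_information(xs, noise))  # → 0.0
```

In the information bottleneck picture, a trained layer sits between these extremes: low mutual information with the raw input's noise, high mutual information with the labels.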

2. Capsule Network: Simulating the visual processing capabilities of the brain

What it is: Capsule networks are a new type of deep neural network that processes visual information in much the same way as the brain, meaning they can maintain hierarchical relationships. This stands in stark contrast to convolutional neural networks, one of the most widely used types of neural network, which fail to take into account the important spatial hierarchies between simple and complex objects, leading to misclassification and high error rates.

Why it matters: For typical recognition tasks, capsule networks promise better accuracy through the reduction of errors (by as much as 50%). They also require less data to train a model. Expect to see capsule networks used widely across many problem domains and deep neural network architectures.
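A defining detail of capsule networks is that each capsule outputs a vector whose length encodes the probability that an entity is present and whose direction encodes its pose. The "squash" nonlinearity from the original capsule paper (Sabour et al., 2017) makes this work by shrinking any vector to length below 1 while preserving its direction. A minimal stdlib sketch:

```python
import math

def squash(v):
    """Capsule 'squash' nonlinearity: scale a vector's length into
    [0, 1) while preserving its direction (Sabour et al., 2017)."""
    sq_norm = sum(x * x for x in v)
    norm = math.sqrt(sq_norm)
    if norm == 0.0:
        return [0.0] * len(v)
    scale = sq_norm / (1.0 + sq_norm) / norm
    return [scale * x for x in v]

short = squash([0.1, 0.0])   # short vectors shrink almost to zero
long_ = squash([10.0, 0.0])  # long vectors approach (but never reach) length 1
print(short, long_)
```

Short vectors (unlikely entities) are squashed toward zero; long vectors (confident detections) approach unit length, which is what lets the routing between capsule layers treat length as a probability.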

3. Deep reinforcement learning: interact with the environment to solve business problems

What it is: A type of neural network that learns by interacting with an environment through observations, actions, and rewards. Deep reinforcement learning (DRL) has been used to learn gaming strategies, such as for Atari and Go, including the famous AlphaGo program that defeated a human champion.

Why it matters: DRL is the most general-purpose of all the learning techniques, so it can be applied to most business problems. It requires less data than other techniques to train its models. Even more notably, it can be trained via simulation, which eliminates the need for fully labeled data. Given these advantages, expect to see more business applications that combine DRL with agent-based simulation in the coming year.
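The observation–action–reward loop described above can be sketched with tabular Q-learning on a toy problem (in deep RL proper, a neural network would replace the Q-table). This is a hypothetical illustration: an agent in a five-state corridor learns, purely from interaction and a reward at the goal, to always move right:

```python
import random

# Tabular Q-learning on a 1-D corridor: states 0..4, reward 1.0 for
# reaching state 4; actions: 0 = step left, 1 = step right.
random.seed(0)
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]        # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1         # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = random.randrange(2) if random.random() < eps else (0 if Q[s][0] > Q[s][1] else 1)
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Temporal-difference update toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # the learned policy: move right (1) in every state
```

No state was ever labeled "good" or "bad"; the values propagate backward from the reward alone, which is why DRL can be trained in simulation without labeled data.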

4. Generative Adversarial Networks (GANs): Paired neural networks that spur learning and lighten the processing load

What it is: A generative adversarial network (GAN) is a type of unsupervised deep learning system implemented as two competing neural networks. One network, the generator, creates fake data that looks just like the real data set. The second network, the discriminator, ingests both real and synthetic data. Over time, each network improves, enabling the pair to learn the entire distribution of the given data set.

Why it matters: GANs open up deep learning to a wider range of unsupervised tasks in which labeled data does not exist or is too expensive to obtain. They also reduce the load required of a deep neural network, since the two networks share the burden. Expect to see more business applications, such as cyber-attack detection, that use GANs.
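The generator/discriminator tug-of-war can be shown at its absolute smallest. In this deliberately tiny, hypothetical sketch (not a practical GAN), the "generator" is a single scalar g that tries to imitate real data fixed at 3.0, and the "discriminator" is a logistic model D(x) = sigmoid(w·x + b) trained to tell real from fake:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

g, w, b, lr = 0.0, 0.0, 0.0, 0.1
REAL = 3.0

for _ in range(5000):
    d_real, d_fake = sigmoid(w * REAL + b), sigmoid(w * g + b)
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    w -= lr * ((d_real - 1.0) * REAL + d_fake * g)
    b -= lr * ((d_real - 1.0) + d_fake)
    # Generator step: nudge g in the direction that makes D call it real.
    g += lr * w * (1.0 - sigmoid(w * g + b))

print(round(g, 2))  # g has moved from 0.0 toward the real value 3.0
```

Neither network ever sees a label; the discriminator's own gradient is what teaches the generator where the real data lives, which is exactly the load-sharing the section describes.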

5. Lean and augmented data learning: Addressing the labeled-data challenge

What it is: The biggest challenge in machine learning (deep learning in particular) is the need for large amounts of labeled data to train the system. Two broad techniques can help address this: (1) synthesizing new data, and (2) transferring a model trained for one task or domain to another. Techniques such as transfer learning (transferring insights learned from one task or domain to another) and one-shot learning (transfer learning taken to the extreme, where learning occurs with only one or even zero relevant examples) make "lean data" learning possible. Similarly, synthesizing new data through simulation or interpolation helps obtain more data, augmenting existing data sets to improve learning.

Why it matters: With these techniques, we can address a wider variety of problems, especially those with little historical data. Expect to see more variations of lean and augmented data learning, as well as different kinds of learning, applied to a broad range of business problems.
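Synthesizing new data by interpolation, mentioned above, can be sketched in a few lines. This hypothetical example creates "mixup"-style training pairs: each new example is a convex combination of two existing examples (it assumes numeric feature vectors and numeric labels):

```python
import random

def interpolate(x1, x2, lam):
    """Convex combination of two feature vectors."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def augment(data, n_new, seed=0):
    """data: list of (features, label) pairs; returns n_new synthetic pairs
    built by interpolating randomly chosen existing pairs."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        (x1, y1), (x2, y2) = rng.sample(data, 2)
        lam = rng.random()
        out.append((interpolate(x1, x2, lam), lam * y1 + (1 - lam) * y2))
    return out

data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0), ([2.0, 0.0], 1.0)]
extra = augment(data, n_new=5)
print(len(extra))  # → 5
```

Three labeled examples became eight, with every synthetic point lying inside the region spanned by the originals, so the augmented set stays plausible while giving the learner more to work with.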

6. Probabilistic programming: Languages that simplify model development

What it is: High-level programming languages that make it easy for developers to design probabilistic models and then automatically "solve" them. Probabilistic programming languages make it possible to reuse model libraries, support interactive modeling and formal verification, and provide the abstraction layer needed to foster generic, efficient inference across general classes of models.

Why it matters: Probabilistic programming languages can accommodate the uncertain and incomplete information that is so common in business domains. We will see wider adoption of these languages, and expect them to be applied to deep learning as well.
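What such a language automates can be shown by doing it by hand once. The sketch below defines a model (a uniform prior on a coin's bias p, with a binomial likelihood for observing 8 heads in 10 flips) and then "solves" it with a generic inference routine, Metropolis-Hastings sampling, rather than model-specific code; a probabilistic programming language lets you write only the model and supplies the inference:

```python
import math
import random

def log_posterior(p, heads=8, flips=10):
    """Log of prior x likelihood (uniform prior contributes a constant)."""
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

random.seed(1)
p, samples = 0.5, []
for i in range(20000):
    prop = p + random.gauss(0.0, 0.1)          # random-walk proposal
    if math.log(random.random()) < log_posterior(prop) - log_posterior(p):
        p = prop                                # accept the proposal
    if i >= 2000:                               # discard burn-in samples
        samples.append(p)

est = sum(samples) / len(samples)
print(round(est, 2))  # posterior mean of p; the analytic answer is 9/12 = 0.75
```

The posterior here is Beta(9, 3) with mean 0.75, so the sampler's estimate can be checked against the exact answer; for real business models no closed form exists, which is why automated generic inference matters.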

7. Hybrid learning models: Combining approaches to model uncertainty

What it is: Different types of deep neural networks, such as GANs and DRL, have shown great promise in performance across a wide variety of data types. However, deep learning models do not model uncertainty the way Bayesian, or probabilistic, approaches do. Hybrid learning models combine the two approaches to leverage the strengths of each. Some examples of hybrid models are Bayesian deep learning, Bayesian GANs, and Bayesian conditional GANs.

Why it matters: Hybrid learning models make it possible to expand the variety of business problems addressed to include deep learning with uncertainty. This can help us achieve better performance and model explainability, which in turn can encourage wider adoption. Expect to see more deep learning methods gain Bayesian equivalents, and a combination of probabilistic programming languages that begin to incorporate deep learning.
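One practical route to Bayesian deep learning is Monte Carlo dropout: keep dropout active at prediction time and treat the spread of repeated stochastic forward passes as an uncertainty estimate. The sketch below uses a fixed linear layer with hypothetical weights in place of a real network, purely to show the mechanism:

```python
import random
import statistics

def forward(x, weights, drop_p, rng):
    """One stochastic forward pass with inverted dropout: each input unit
    is dropped with probability drop_p and survivors are rescaled."""
    keep = 1.0 - drop_p
    return sum(w * xi / keep for w, xi in zip(weights, x) if rng.random() > drop_p)

rng = random.Random(42)
weights = [0.5, -0.2, 0.8]      # hypothetical trained weights
x = [1.0, 2.0, 3.0]

# Many stochastic passes over the same input give a predictive distribution.
preds = [forward(x, weights, 0.2, rng) for _ in range(1000)]
mean, std = statistics.mean(preds), statistics.stdev(preds)
print(round(mean, 2), round(std, 2))  # a prediction plus an uncertainty band
```

A plain deterministic network would return a single number; the hybrid view returns a mean and a spread, and a wide spread is a signal to trust the prediction less, which is exactly the uncertainty awareness this trend is about.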

8. Automated Machine Learning (AutoML): Model creation without programming

What it is: Developing a machine learning model requires a time-consuming and expert-driven workflow that includes data preparation, feature selection, model or technology selection, training, and tuning. AutoML is designed to automate this workflow using a number of different statistical and deep learning techniques.

Why it matters: AutoML is part of the democratization of AI tools, enabling business users to develop machine learning models without a deep programming background. It will also reduce the time it takes data scientists to create models. Expect to see more commercial AutoML packages, and AutoML integrated into larger machine learning platforms.
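The core of AutoML's "model or technique selection and tuning" step can be sketched as an automated search: try each candidate configuration, score it on held-out data, and keep the best, with no hand-tuning by the user. This toy example (hypothetical data; the configuration being searched is simply polynomial degree) picks the model complexity automatically:

```python
def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved with
    naive Gauss-Jordan elimination (fine for tiny systems like these)."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col and A[col][col]:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

def predict(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# Noisy observations of an underlying quadratic y = x^2 + 1.
train_x, train_y = [0.0, 1.0, 2.0, 3.0], [1.5, 1.8, 5.3, 9.9]
val_x, val_y = [1.5, 2.5], [3.25, 7.25]

# Automated selection: evaluate degrees 0..3 on the validation set.
best = min(range(4), key=lambda d: sum(
    (predict(fit_poly(train_x, train_y, d), x) - y) ** 2
    for x, y in zip(val_x, val_y)))
print(best)  # → 2 (the quadratic wins on validation error)
```

Degree 3 interpolates the four noisy points exactly and does worse on held-out data, while degrees 0 and 1 underfit; the search lands on degree 2 without anyone inspecting the data. Commercial AutoML systems do the same over far richer spaces of models and hyperparameters.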

9. Digital Twins: Virtual replicas that transcend industrial applications

What it is: A digital twin is a virtual model used to facilitate detailed analysis and monitoring of physical or psychological systems. The concept of the digital twin originated in industry, where it is widely used to analyze and monitor things such as wind farms or industrial systems. Digital twins are now being applied to non-physical objects and processes as well, including forecasting customer behavior using agent-based modeling (computational models for simulating the actions and interactions of autonomous agents) and system dynamics (a computer-aided approach to policy analysis and design).

Why it matters: Digital twins can help drive the development and broader adoption of the Internet of Things (IoT), providing a way to predict, diagnose, and plan maintenance for IoT systems. Looking ahead, expect to see more digital twins used both for modeling physical systems and for modeling consumer choices.
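The predict-and-maintain use mentioned above can be illustrated with a toy twin of an industrial pump (all parameters hypothetical): a simulation model kept in sync with sensor readings from the physical asset, then run forward to forecast when wear will cross a maintenance threshold:

```python
import random

class PumpTwin:
    """A toy digital twin: tracks a wear state mirrored from the real
    pump and simulates forward to forecast remaining useful life."""

    def __init__(self, wear=0.0, wear_per_hour=0.001):
        self.wear = wear
        self.wear_per_hour = wear_per_hour

    def sync(self, sensor_wear):
        """Update the twin from the latest real sensor reading."""
        self.wear = sensor_wear

    def hours_until_maintenance(self, threshold=1.0, rng=None):
        """Simulate hour-by-hour wear (with random variation) until the
        maintenance threshold is crossed; return the forecast in hours."""
        rng = rng or random.Random(7)
        wear, hours = self.wear, 0
        while wear < threshold:
            wear += self.wear_per_hour * rng.uniform(0.5, 1.5)
            hours += 1
        return hours

twin = PumpTwin()
twin.sync(sensor_wear=0.4)               # latest measurement from the field
print(twin.hours_until_maintenance())    # forecast hours of useful life left
```

The twin lets maintenance be scheduled from a forecast rather than after a failure; the same sync-then-simulate pattern carries over to non-physical twins such as agent-based models of customer behavior.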

10. Explainable AI: Understanding the black box

What it is: Today, a great many machine learning algorithms are in use, perceiving, reasoning, and acting across a wide variety of applications. However, many of these algorithms are considered "black boxes," offering little insight into how they reach their results. Explainable AI is a movement to develop machine learning techniques that produce more explainable models while maintaining predictive accuracy.

Why it matters: AI that is explainable, provable, and transparent is critical to establishing trust in the technology and encouraging wider adoption of machine learning techniques. Companies will adopt explainable AI as a requirement or best practice before embarking on large-scale deployments of artificial intelligence, and governments may make explainable AI a regulatory requirement in the future.
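One widely used, model-agnostic explainability technique is permutation importance: shuffle one feature's values and measure how much the model's accuracy drops, revealing which inputs a black box actually relies on. In this hypothetical sketch the "model" is a hand-made rule that secretly uses only feature 0, and the technique uncovers that fact without looking inside it:

```python
import random

def model(row):
    """A stand-in black box: in truth it only looks at feature 0."""
    return 1 if row[0] > 0.5 else 0

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(500)]
y = [model(row) for row in X]           # labels agree with the model exactly

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature's column is randomly shuffled."""
    col = [r[feature] for r in X]
    rng.shuffle(col)
    X_perm = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
    return accuracy(X, y) - accuracy(X_perm, y)

print(permutation_importance(X, y, 0))  # large drop: feature 0 matters
print(permutation_importance(X, y, 1))  # → 0.0: feature 1 is irrelevant
```

Shuffling feature 0 roughly halves the accuracy while shuffling feature 1 changes nothing, so the explanation "this model depends on feature 0, not feature 1" is obtained purely by querying the black box, which is the kind of transparency regulators and businesses are asking for.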

Having read this far, those working in AI should have a general sense of the direction ahead. What changes will artificial intelligence bring to our lives? That is hard to imagine. Perhaps the technicians in the labs already know the answer.
