
The Best of the Machine Learning Algorithms Used in Artificial Intelligence

The digital world is taking us for a joy ride. We will be able to guide only part of the Artificial Intelligence revolution; disruption and natural evolution through machine learning algorithms will take care of the rest. We can combine nature-inspired algorithms with machine learning to improve the accuracy of our models. For example, genetic algorithms can help tune hyperparameters or choose features.

Related: Advanced Natural Language Generation (NLG)

“Artificial Intelligence” is no match for “Natural Stupidity”. Human involvement in mundane tasks will shrink. Initially, Artificial Intelligence will handle routine, repetitive tasks in organizations. We are a few years away from it being slowly integrated with corporate applications. This will eliminate many manual back-end processing jobs with the help of supervised and unsupervised machine learning algorithms.

Humanity still has a long way to go before machine intelligence takes on anything close to resembling sentience, with hardware being the limiting factor. A generic learning system requires an incredible amount of hardware resources, not just software.

Artificial Intelligence Algorithms

Machines just need a shorthand way to do things like checking the current weather or adding an event to your calendar. The technique with which machines achieve such results is called Artificial Intelligence.

Machine Learning

Machine learning was named a top strategic trend for 2016 by Gartner, and Ovum predicts that machine learning will be a necessary element of data preparation and predictive analysis in businesses going forward. Machine learning (ML) is a discipline in which a program or system learns from existing data and dynamically alters its behavior as that data changes; the system can therefore learn without being explicitly programmed. Machine learning algorithms can be broadly categorized into classification, clustering, regression, dimensionality reduction, anomaly detection and so on.

Machine Learning and Cognitive Systems

Cognitive Computing has interesting use cases catering to multiple industries and functions. Routine, repetitive applications will be automated first through workplace Artificial Intelligence; next, corporate applications will start to add value to the business. The machine learning module acts as the core computing engine: using algorithms and techniques, it helps Cognitive Systems identify patterns and perform complex tasks such as prediction, estimation, forecasting and anomaly detection. Open-source machine learning libraries like Mahout and Spark ML have made machine learning algorithms accessible to a wider audience. With larger and more complex data sets entering the health care field, machine learning models and AI will become table stakes, and health care incumbents will have to find ways to use these algorithms as well. Apple, Google, Microsoft, Intel and IBM have played a key role in making deep learning capabilities accessible to the developer community through their cognitive services and APIs, which can easily be embedded into other applications.

Artificial Intelligence is impossible until we master Quantum Computing, since these systems are not ‘self-aware’. These are technologies that deal with what are known as ‘difficult’ problems, where conventional programming is not well suited. Artificial Intelligence would not spring up from nothing. Even if it replicated itself, as in a science-fiction movie like Terminator, following John von Neumann’s self-replicating machine, the materials for robots to build other robots would not materialize out of thin air. Four things motivate our study:

  • Who will control and monitor the daily transactions of this database and for what purpose?
  • What if the system crashes after we have become dependent on it?
  • Artificial Intelligence will always be as intelligent as you let it be. If artificial intelligence does not learn, will it be useless?
  • How would the artificial intelligence manager respond to an emergency situation, a one-in-a-million incident that it has no data on?

The Human Factor in Machine Learning Algorithms

Artificial Intelligence exists because of capitalism; if capitalism ends, then Artificial Intelligence ends. What will happen to the people who become unemployed due to this technological shift is a matter of concern. The economic means of production still exist: someone has to produce the raw materials, be it a mining company listed on the stock market that buys trucks from a vehicle manufacturer, and so forth, for production to take place. Artificial Intelligence scientists believe there is a 50% chance that artificial intelligence will reach HLMI (High-Level Machine Intelligence) by 2040-2050, rising to 90% by 2075. High-Level Machine Intelligence is defined as being able to carry out most human professions at least as well as a typical human.

Related: Business Intelligence involving Ethics and Maturity

In the U.S., the bottom 25-33 percent of income earners could basically vanish without significantly disrupting the national economy. As automation increases, the idea that our focus should be on tasks that utilize creativity and emotion resonates strongly. Artificial intelligence is designed to sift through massive amounts of data. There is, however, a concern that people will use machines as an excuse not to learn the skills needed to think critically in their fields. For example, the last thing we want is people making decisions because “the robot told us” instead of understanding how it reached that conclusion.

Although Artificial Intelligence is helpful, we believe it cannot replace experts. There is certainly a sense of fear about artificial intelligence making support functions redundant. That said, it is an exciting time to think about the jobs of the future and how we can best utilize the human qualities that cannot be mimicked.

In Australia, CEDA (the Committee for Economic Development of Australia) predicts that technology will replace 40% of the workforce within 20 years; PwC predicts 44% in the same time frame. It is not just support functions that will be replaced by artificial intelligence; higher-value professional jobs are also being targeted.

Technology is a tool, and we should drive it rather than let it drive us. We see people who cannot read maps blindly following SatNav, TomTom or Garmin, with no idea where they are if the system fails. The European Union has already considered an ‘electronic person’ identifier for jobs once done physically by humans, in order to replace depleted taxable income or to build a fund in case a human sues an electronic person over a negative interaction. There are many factors to consider, because Information Technology people and services are not simply plug and play, as many decision makers may think. The human factor must prevail over artificial judgement.

The movie Sully is a good example

While the main character, Captain Sully, made the right decision by landing on the Hudson River, the simulation indicated he could easily have returned to the runway, which was absolutely incorrect. Later, it was humans who accepted that the technology they relied upon was wrong and that Captain Sully’s decision was right. So we are all for an expert over Artificial Intelligence. The best practical approach to finding a good algorithm for a given problem is trial and error. Heuristics provide a good guide, but often you can get the best results by breaking some rules or modelling assumptions.

How Will Artificial Intelligence Transform The Workplace

People are afraid of artificial intelligence machines, and people misuse, abuse and overuse such systems. Are these machines disposable, serviceable or simply replaceable? Artificial intelligence in the workplace can make the working environment dull and boring. So far, economic activity at large almost invents grunt work so that large populations can be fully employed, whether as entrepreneurs or at the umpteen levels of social hierarchy below them. What happens when workplace Artificial Intelligence removes grunt work, or cannot distinguish social impact from productivity impact, is a question to be answered with the help of machine learning algorithms. What happens to all the people who depend on grunt work, and how they would re-purpose themselves, is another debate; likely, they would not be able to re-purpose themselves without machine learning algorithms. When an artificial intelligence machine fails, the bad results are eventually traced to human error: that of the one who created it.

Supervised Learning Algorithms

Most of what people consider workplace Artificial Intelligence is closer to programmer-assisted learning, meaning that the algorithms and goals are predefined ahead of time, as with forecasting algorithms like ARIMA, TBATS and Prophet. Most of applied machine learning (e.g. predictive modeling) is concerned with supervised learning algorithms. These algorithms fall into the following classifications:

Regression

Regression is concerned with modelling the relationship between variables, iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use “regression” to refer to both the class of problem and the class of algorithm. For instance, a solution may depend on the outcome at multiple ‘nodes’, where each node may continue to vary for each event, such as creating an invoice for a logistics service with many variables to resolve for each transaction. Really, regression is a process.

Related: Emerging Technology like Artificial Intelligence helps improve Business Intelligence

Regression helps predict a continuous-valued attribute associated with an object; its applications include drug response and stock prices. The most appropriate algorithms for this branch are SVR, ridge regression, Lasso, Ordinary Least Squares, Logistic Regression, Stepwise Regression, Multivariate Adaptive Regression Splines (MARS), and Locally Estimated Scatterplot Smoothing (LOESS).
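
As a rough illustration (not from the original article), the sketch below fits two of the listed methods, Ordinary Least Squares and Logistic Regression, on synthetic data with scikit-learn; the data and parameters are arbitrary.

```python
# Minimal regression sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_regression, make_classification
from sklearn.linear_model import LinearRegression, LogisticRegression

# Continuous target: Ordinary Least Squares
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
ols = LinearRegression().fit(X, y)
print("OLS coefficients:", ols.coef_)

# Binary target: Logistic Regression (a regression method used for classification)
Xc, yc = make_classification(n_samples=200, n_features=4, random_state=0)
logit = LogisticRegression(max_iter=1000).fit(Xc, yc)
print("Predicted class probabilities:", logit.predict_proba(Xc[:3]))
```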

Regularization Methods

Regularization is an extension made to another method (typically a regression method) that penalizes models based on their complexity, favoring simpler models that also generalize better. Regularization methods are popular, powerful and generally simple modifications made to other methods. Examples include Ridge Regression, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Elastic Net.
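
A minimal sketch, assuming scikit-learn and synthetic data, of how the penalties differ: L1-based methods (Lasso, Elastic Net) drive many coefficients to exactly zero, while Ridge only shrinks them.

```python
# Compare unregularized OLS with Ridge, Lasso and Elastic Net penalties.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=15.0, random_state=0)

models = {
    "OLS": LinearRegression(),
    "Ridge (L2)": Ridge(alpha=1.0),
    "Lasso (L1)": Lasso(alpha=1.0),
    "Elastic Net": ElasticNet(alpha=1.0, l1_ratio=0.5),
}
for name, model in models.items():
    model.fit(X, y)
    # Count how many coefficients survive the penalty.
    print(f"{name}: non-zero coefficients = {np.sum(model.coef_ != 0)}")
```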

Instance-based Methods

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods or memory-based learning. The focus is on the representation of the stored instances and the similarity measures used between instances. Some vital algorithms are k-Nearest Neighbours (kNN), Learning Vector Quantization (LVQ) and the Self-Organizing Map (SOM).
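
An illustrative k-Nearest Neighbour sketch (scikit-learn, Iris data; not from the article): the "database" of stored instances is simply the training set, and prediction is a similarity lookup.

```python
# kNN: memorize the training instances, classify by the nearest stored examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # similarity measure: Euclidean distance
knn.fit(X_train, y_train)                   # "training" = storing the instances
print("Test accuracy:", knn.score(X_test, y_test))
```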

Classification

Classification algorithms identify which category an object belongs to; applications include spam detection and image recognition. The most popular algorithms in this class are SVM, nearest neighbours, random forests, Classification and Regression Trees (CART), Iterative Dichotomiser 3 (ID3), C4.5, Chi-squared Automatic Interaction Detection (CHAID), Decision Stump, Multivariate Adaptive Regression Splines (MARS), and Gradient Boosting Machines (GBM).
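
A hedged sketch (again scikit-learn on synthetic data, not the article's code) contrasting two of the listed classifiers, an SVM and a random forest, on the same problem.

```python
# Fit two classifiers on the same synthetic data and compare test accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (SVC(kernel="rbf"), RandomForestClassifier(n_estimators=100, random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "accuracy:", clf.score(X_test, y_test))
```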

Decision Tree Learning

Decision tree methods construct a model of decisions based on actual values of attributes in the data. Decisions fork in tree structures until a prediction is made for a given record. Decision trees are trained on data for classification and regression problems, and they underpin consequential applications such as how taxes will be collected or how Artificial-Intelligence-driven layoffs are decided.
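
A minimal CART-style sketch (scikit-learn, Iris data; illustrative only): printing the learned rules shows how decisions fork until a leaf prediction is reached.

```python
# Train a shallow decision tree and print its if/else rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)
print(export_text(tree, feature_names=iris.feature_names))
```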

Bayesian

Bayesian methods are those that explicitly apply Bayes’ Theorem to problems such as classification and regression. The best-suited algorithms are the Naive Bayes classifier, Averaged One-Dependence Estimators (AODE) and Bayesian Belief Networks (BBN).
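
An illustrative Naive Bayes sketch (scikit-learn): Bayes' Theorem combined with a conditional-independence assumption over the features yields class posteriors directly.

```python
# Gaussian Naive Bayes: fit, score, and inspect posterior probabilities.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print("Accuracy:", nb.score(X_test, y_test))
print("Posterior P(class | x) for one sample:", nb.predict_proba(X_test[:1]))
```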

Kernel Methods

Kernel methods are best known for the popular Support Vector Machine, which is really a constellation of methods in and of itself. Kernel methods are concerned with mapping input data into a higher-dimensional vector space where some classification or regression problems are easier to model. The machine learning algorithms included here are Support Vector Machines (SVM), Radial Basis Function (RBF) networks and Linear Discriminant Analysis (LDA).
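
A hedged sketch of the kernel idea: on the classic two-moons data (synthetic, chosen here for illustration), an RBF-kernel SVM separates classes that a linear kernel cannot, because the kernel implicitly maps the points into a higher-dimensional space.

```python
# Linear vs. RBF kernel on data that is not linearly separable.
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("Linear kernel accuracy:", linear_svm.score(X, y))
print("RBF kernel accuracy:   ", rbf_svm.score(X, y))
```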

Clustering Methods

Clustering, like regression, describes both a class of problem and a class of methods. Clustering methods are typically organized by their modelling approach, such as centroid-based or hierarchical. All are concerned with using the inherent structure in the data to organize it into groups of maximum commonality. The automatic grouping of similar objects into sets is called clustering; applications such as customer segmentation and grouping experiment outcomes are achieved with it. The most useful algorithms in this domain are k-Means, spectral clustering, mean-shift, and Expectation Maximization (EM).
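
A minimal k-Means sketch (illustrative only): grouping unlabeled points into three clusters, much as one would segment customers without predefined labels.

```python
# k-Means on synthetic blobs: no labels are used during fitting.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("Cluster centroids:\n", kmeans.cluster_centers_)
print("First ten cluster labels:", kmeans.labels_[:10])
```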

Association Rule Learning

Association rule learning methods extract rules that best explain observed relationships between variables in data. These rules can discover important and commercially useful associations in large multidimensional datasets that an organization can exploit. They include the Apriori and Eclat algorithms.
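
A toy sketch of the support-counting step behind Apriori-style rule mining, using a handful of hypothetical shopping baskets; a real analysis would use a dedicated library rather than this hand-rolled loop.

```python
# Count item pairs across transactions and keep the frequent ones (Apriori's first phase).
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]
min_support = 0.5  # a pair must appear in at least half of the transactions

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

for pair, count in pair_counts.items():
    support = count / len(transactions)
    if support >= min_support:
        print(f"{pair} support={support:.2f}")
```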

Dimensionality Reduction

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe the data using less information. This can be useful for visualizing high-dimensional data or for simplifying data that can then be used in a supervised learning method. Reducing the number of random variables to consider helps with visualization and efficiency. The most-used algorithms in this branch are Principal Component Analysis (PCA), feature selection, non-negative matrix factorization, Partial Least Squares Regression (PLS), Sammon Mapping, Multidimensional Scaling (MDS) and Projection Pursuit.
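
An illustrative PCA sketch (scikit-learn, Iris data): the 4-dimensional measurements are projected onto 2 components for visualization while retaining most of the variance.

```python
# Project 4-D Iris features onto 2 principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Reduced shape:", X_2d.shape)
```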

Ensemble Methods

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort goes into choosing what types of weak learners to combine and how to combine them. This is a very powerful and therefore very popular class of techniques, including Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM) and Random Forests.
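
A hedged sketch comparing a single weak learner with two of the listed ensembles, a bagging-based Random Forest and a boosting-based GBM, on arbitrary synthetic data.

```python
# Single tree vs. bagging vs. boosting on the same synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    DecisionTreeClassifier(max_depth=3, random_state=0),           # single weak learner
    RandomForestClassifier(n_estimators=200, random_state=0),      # bagging
    GradientBoostingClassifier(n_estimators=200, random_state=0),  # boosting
]
for model in models:
    model.fit(X_train, y_train)
    print(type(model).__name__, "accuracy:", round(model.score(X_test, y_test), 3))
```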

Artificial Neural Networks

Artificial Neural Networks are models inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

Biological organisms carry a logical series of algorithms that regulates their commands and responses. We usually call this genetic material, DNA or RNA, but it can also be seen as a set of “algorithms” regulating all of an organism’s activities, its responses and commands, because particular DNA or RNA sequences carry a special kind of code that can be used by different performers, in this case enzymes.

Reinforcement learning models learn through reward and punishment. This allows an agent to explore and memorize the states of an environment or its actions in a way very similar to how the actual brain learns using the pleasure circuit (TD-Learning). It also has a very useful property, blocking, which naturally lets a reinforcement learning model use only the stimuli and information that are useful for predicting the reward; useless stimuli are “blocked”, that is, filtered out. This is currently being combined with deep learning to model more biologically plausible and powerful neural networks that may, for example, solve games such as Go.
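
A toy TD(0) sketch on a hypothetical five-state chain (not from the article): the agent drifts right, receives a reward only on reaching the terminal state, and each value estimate is nudged toward "reward plus discounted value of the next state".

```python
# Tabular TD(0) value estimation on a simple chain environment.
import random

n_states, alpha, gamma = 5, 0.1, 0.9
values = [0.0] * (n_states + 1)           # index n_states is the terminal state

for _ in range(1000):                     # episodes
    state = 0
    while state < n_states:
        next_state = state + 1 if random.random() < 0.8 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states else 0.0
        td_error = reward + gamma * values[next_state] - values[state]
        values[state] += alpha * td_error  # temporal-difference update
        state = next_state

print("Learned state values:", [round(v, 2) for v in values[:n_states]])
```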

Genetic algorithms are best used when facing a very high-dimensional problem or multimodal optimizations, where you have multiple equally good solutions, i.e. multiple equilibria. A big advantage is that genetic algorithms are derivative-free cost optimization methods, so they are very generic and can be applied to virtually any problem and find a good solution, even if other algorithms may find better ones.
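
A minimal genetic-algorithm sketch (illustrative, with an arbitrary multimodal objective): selection, crossover and mutation search for a good solution without using any derivatives.

```python
# Derivative-free search: evolve a population toward the maximum of a bumpy function.
import math
import random

def fitness(x):
    # Multimodal objective with many local optima.
    return math.sin(5 * x) + math.cos(3 * x)

population = [random.uniform(-3, 3) for _ in range(50)]
for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    # Crossover + mutation: children blend two parents plus Gaussian noise.
    children = [
        (random.choice(parents) + random.choice(parents)) / 2 + random.gauss(0, 0.1)
        for _ in range(25)
    ]
    population = parents + children

best = max(population, key=fitness)
print(f"Best x = {best:.3f}, fitness = {fitness(best):.3f}")
```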

Neural networks are especially used with Machine Learning (ML) techniques to probe statistical regularities and build data-driven products. Recently, there has been an upsurge in the availability of easy-to-use machine and deep learning packages such as scikit-learn, Weka and TensorFlow. Some of the classically popular methods include the Perceptron, Back-Propagation, the Hopfield Network, the Self-Organizing Map (SOM), and Learning Vector Quantization (LVQ).
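
A hedged sketch of the simplest of these ideas: a small multi-layer perceptron trained by back-propagation, using the scikit-learn package named above (Iris data chosen only for illustration).

```python
# Small MLP trained with back-propagation on scaled Iris features.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)           # neural networks prefer scaled inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)                       # weights learned via back-propagation
print("Test accuracy:", mlp.score(X_test, y_test))
```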

Deep Learning

Deep learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation. They are concerned with building much larger and more complex neural networks; many methods address semi-supervised learning problems, where large datasets contain very little labelled data (see Table 1 below, a typical table showing formatted research data available for download).

Deep learning is a growing field of artificial intelligence and can provide opportunities for a challenging and lucrative career, especially if you are interested in computer vision challenges like image recognition. The suite of deep learning algorithms comprises Restricted Boltzmann Machines (RBM), Deep Belief Networks (DBN), Convolutional Networks, and Stacked Auto-encoders.
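
A minimal deep-learning sketch assuming TensorFlow/Keras is installed (the article only names the package; this particular toy network and data are illustrative, not prescribed by it).

```python
# Tiny fully-connected network trained on synthetic binary-labelled data.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```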

Algorithms and Complex Optimizations

If the complexity with which the human brain processes data and information could be expressed in mathematical terms or as a set of systems, then we could almost mimic it using machines. Machine learning algorithms suffer from some degree of variance. This matters for understanding the computational efficiency and scalability of a machine learning algorithm and for exploiting sparsity in our datasets. Knowledge of data structures such as binary trees, hashing, heaps and stacks, as well as dynamic programming, randomized and sublinear algorithms, graphs, gradient/stochastic descent and primal-dual methods, is needed.
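
A small sketch of the gradient descent mentioned above, minimizing a convex mean-squared-error loss for linear regression with NumPy; the data and learning rate are arbitrary.

```python
# Plain (batch) gradient descent for least-squares linear regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
learning_rate = 0.1
for step in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= learning_rate * grad               # step along the negative gradient

print("Recovered weights:", np.round(w, 3))  # should be close to [2.0, -1.0, 0.5]
```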

Convex Optimization 

There are many reasons why the mathematics of machine learning is important. Machine learning theory is a field at the intersection of statistics, probability, computer science and algorithms, concerned with learning iteratively from data (for example with logistic regression) and finding hidden insights that can be used to build intelligent applications.

Probabilistic models, for example Monte Carlo methods, Markov chains, Markovian processes and Gaussian mixtures, and probabilistic graphical models, for example Bayesian networks, credal networks and Markov graphs, are great for uncertain situations, since they let us manipulate uncertain values and hidden variables. Graphical models are somewhat close to deep learning but more flexible: it is easier to define a PGM from the semantics of what you want to do than to design a deep learning network.
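
A toy Markov-chain sketch with hypothetical weather states (not from the article): simulating the chain and estimating its long-run state distribution by Monte Carlo, two of the ideas named above.

```python
# Simulate a two-state Markov chain and estimate its stationary distribution.
import numpy as np

states = ["sunny", "rainy"]
# transition[i, j] = P(next state j | current state i)
transition = np.array([[0.8, 0.2],
                       [0.4, 0.6]])

rng = np.random.default_rng(0)
state, counts = 0, np.zeros(2)
for _ in range(100_000):
    counts[state] += 1
    state = rng.choice(2, p=transition[state])

print("Estimated stationary distribution:", dict(zip(states, counts / counts.sum())))
```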

Despite the immense possibilities of machine and deep learning, a thorough mathematical understanding of many of these techniques is necessary to grasp the inner workings of the algorithms and get good results. There is a long way to go before we can control them, and further still to master them. Artificial personality is still the one to watch out for. We always need to update our skills with optimism and a willingness to change. Let’s hope that the people required to make important decisions get more intelligent. When will machine learning algorithms replace humans? It will happen; the only question is when.

Conclusion

The Artificial Intelligence revolution is going to happen in phases. In the existing scenario, we are looking at tasks within the current socio-economic structure that are going to be automated. Beyond this phase, the next level of evolution will happen within a changed structure, which again depends on the level of Artificial Intelligence penetration and the direction it takes, and nobody can say what that new socio-economic structure will be. It will surely make a lot of things redundant and throw up new opportunities as well, just as has happened over the past century; the only difference is that the pace and cycle of change will be shorter. At this stage we are only beginning to grasp quantum computing capabilities, much less understand them.

Table 1

Parkinson Disease Spiral Drawings Using Digitized Graphics Tablet Data Set (Isenkul, Sakar, & Kursun, 2014)

Data Set Characteristics: Multivariate
Number of Instances: 77
Attribute Characteristics: Integer
Number of Attributes: 7
Associated Tasks: Classification, Regression, Clustering
Missing Values: N/A

Note: The handwriting database consists of 62 people with Parkinson’s (PWP) and 15 healthy individuals. Three types of recordings (Static Spiral Test, Dynamic Spiral Test and Stability Test) are taken.

Summary

Artificial Intelligence is the best answer for tomorrow as our reliance on natural intelligence gradually declines. With high confidence, we will observe multiple roles taken over by machines in the next few years: customer service representatives, legal assistants, medical assistants, even primary care physicians and many others. It will start with human augmentation but move quite rapidly towards human replacement. In this article, we discuss the different machine learning algorithms used in Artificial Intelligence.
