Sunday, November 10, 2019

Top 12 Artificial Intelligence Tools & Frameworks you need to know


Artificial Intelligence has made it possible to process large amounts of data and put it to use in industry. The number of tools and frameworks available to data scientists and developers has grown along with AI and ML. This article on Artificial Intelligence Tools & Frameworks will walk through some of them in the following sequence:

Artificial Intelligence Tools & Frameworks

Developing a neural network is a long process: it requires a lot of thought about the architecture, plus a whole set of nuances that actually make up the system.
These nuances can easily become overwhelming, and not everything can be tracked by hand. Hence the need for tools in which humans handle the major architectural decisions while the remaining optimization work is left to the tooling. Imagine an architecture with just 4 boolean hyperparameters: testing every possible combination would take 2^4 = 16 runs. Retraining the same architecture 16 times is definitely not the best use of time and energy.
Also, most of the newer algorithms come with a whole set of additional hyperparameters. This is where new tools come into the picture. These tools not only help develop such networks but also optimize them.

List of AI Tools & Frameworks

From the dawn of mankind, we as a species have always been trying to make things that assist us in day-to-day tasks: from stone tools to modern-day machinery, and now to tools that help us develop the programs which assist us in daily life. Some of the most important tools and frameworks are:

Scikit Learn

Scikit-learn is one of the most well-known ML libraries. It supports many supervised and unsupervised learning algorithms. Examples include linear and logistic regression, decision trees, clustering, k-means and so on.

  • It builds on two basic Python libraries, NumPy and SciPy.
  • It adds a set of algorithms for common machine learning and data mining tasks, including clustering, regression and classification. Even tasks like transforming data, feature selection and ensemble methods can be implemented in a few lines, as the short sketch after this list shows.
  • For a beginner in ML, Scikit-learn is a more-than-sufficient tool to work with, until you start implementing more complex algorithms.
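
As a quick illustration of how little code a typical workflow needs, here is a minimal sketch using the iris dataset that ships with scikit-learn; the model choice and split are arbitrary examples, not a recommendation.

# Minimal scikit-learn sketch: load data, train a classifier, evaluate it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # bundled toy dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))     # typically around 0.95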

TensorFlow

If you are in the world of Artificial Intelligence, you have probably heard about, tried or implemented some form of deep learning algorithm. Are they always necessary? Not always. Are they cool when done right? Definitely!
The fascinating thing about TensorFlow is that when you write a program in Python, you can compile and run it on either your CPU or GPU. So you don't need to write at the C++ or CUDA level to run on GPUs.

It uses a system of multi-layered nodes that allows you to quickly set up, train and deploy artificial neural networks with large datasets. This is what enables Google to identify objects in photos or understand spoken words in its voice-recognition application.
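
A minimal sketch of that idea, assuming TensorFlow 2.x is installed: the same Python code defines a small network and runs on whatever devices are visible, with no C++ or CUDA required. The layer sizes are arbitrary examples.

import tensorflow as tf

# Any GPUs TensorFlow can see; the code below runs unchanged either way.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()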

Theano

Theano is wonderfully wrapped by Keras, a high-level neural networks library that runs almost in parallel with the Theano library. Keras' main advantage is that it is a minimalist Python library for deep learning that can run on top of Theano or TensorFlow.
  • It was developed to make implementing deep learning models as fast and easy as possible for research and development.
  • It runs on Python 2.7 or 3.5 and can seamlessly execute on GPUs and CPUs.



What sets Theano apart is that it takes advantage of the computer's GPU. This allows it to perform data-intensive calculations many times faster than when run on the CPU alone. Theano's speed makes it especially valuable for deep learning and other computationally complex tasks.
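
A tiny sketch of the Theano workflow, assuming the theano package is installed: you build a symbolic expression and compile it into a function that Theano can place on the CPU or, if configured, the GPU.

import theano
import theano.tensor as T

x = T.dmatrix("x")
y = T.dmatrix("y")
z = T.dot(x, y) + 1.0            # symbolic expression, nothing computed yet
f = theano.function([x, y], z)   # compiled into an executable function

print(f([[1.0, 2.0]], [[3.0], [4.0]]))   # [[12.]]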

Caffe



'Caffe' is a deep learning framework made with expression, speed and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Google's DeepDream is based on the Caffe framework. The framework is a BSD-licensed C++ library with a Python interface.
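
A hedged sketch of that Python interface (pycaffe): loading a network described by a prototxt file together with pretrained weights. Both file names below are placeholders, not files shipped with Caffe.

import caffe

caffe.set_mode_cpu()
net = caffe.Net("deploy.prototxt",      # network definition (placeholder path)
                "weights.caffemodel",   # pretrained weights (placeholder path)
                caffe.TEST)

# Blob names come from the prototxt; listing them shows the network's structure.
print(list(net.blobs.keys()))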

MxNet

It allows for trading computation time for memory via 'forgetful backprop', which can be very useful for recurrent nets on very long sequences.

  • Built with scalability in mind (fairly easy-to-use support for multi-GPU and multi-machine training).
  • Lots of cool features, like easily writing custom layers in high-level languages (a brief Gluon sketch follows this list).
  • Unlike almost all other major frameworks, it is not directly governed by a major corporation, which is a healthy situation for an open-source, community-developed framework.
  • TVM support, which will further improve deployment and allow running on a whole host of new device types.
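
A brief Gluon sketch (MXNet's high-level Python API): defining, initializing and running a small network in a few lines. The layer sizes are arbitrary illustrations.

import mxnet as mx
from mxnet import nd, gluon

net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(64, activation="relu"),
        gluon.nn.Dense(10))
net.initialize(mx.init.Xavier())

x = nd.random.uniform(shape=(4, 100))   # a dummy batch of 4 examples
print(net(x).shape)                     # (4, 10)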

Keras

If you like the Python-way of doing things, Keras is for you. It is a high-level library for neural networks, using TensorFlow or Theano as its backend. 

The majority of practical problems are more like:
  • picking an architecture suitable for a problem,
  • for image recognition problems – using weights trained on ImageNet,
  • configuring a network to optimize the results (a long, iterative process).
In all of these, Keras is a gem. It also offers an abstract structure that can easily be converted to other frameworks if needed (for compatibility, performance or anything else).
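
Here is a minimal sketch of the second point above (reusing ImageNet weights for a new image problem) using the tf.keras API; the base model and the 5-class output layer are arbitrary choices for illustration.

import tensorflow as tf

# Pretrained ImageNet features, with the original classification head removed.
base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                         include_top=False,
                                         pooling="avg")
base.trainable = False   # keep the pretrained weights frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),   # placeholder class count
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])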

    PyTorch
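
PyTorch is an open-source deep learning framework developed primarily by Facebook's AI research group, known for its dynamic computation graphs and Python-first design. A minimal sketch of that autograd style, assuming the torch package is installed:

import torch

x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()    # the computation graph is built on the fly as this line runs
y.backward()          # backpropagate through that graph
print(x.grad)         # gradient of y with respect to x, i.e. 2 * x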



    CNTK

    CNTK allows users to easily realize and combine popular model types such as feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs). It implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. CNTK is available for anyone to try out, under an open-source license. 

    Auto ML

Out of all the tools and libraries listed above, AutoML is probably one of the strongest and a fairly recent addition to the arsenal available to a machine learning engineer.
As described in the introduction, optimization is of the essence in machine learning tasks. While the benefits reaped from it are lucrative, determining optimal hyperparameters is no easy task. This is especially true for black boxes like neural networks, where identifying the things that matter becomes more and more difficult as the depth of the network increases.

Thus we enter a new realm of meta, wherein software helps us build software. AutoML is a library used by many machine learning engineers to optimize their models.
    Apart from the obvious time saved, this can also be extremely useful for someone who doesn’t have a lot of experience in the field of machine learning and thus lacks the intuition or past experience to make certain hyperparameter changes by themselves.
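
The underlying idea of handing hyperparameter choices to software can be illustrated with scikit-learn's GridSearchCV as a stand-in; full AutoML systems automate far more (including model and architecture selection), but the principle is the same. The dataset and candidate values below are arbitrary.

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Try every combination of these candidate hyperparameters with 3-fold CV.
search = GridSearchCV(SVC(),
                      param_grid={"C": [0.1, 1, 10],
                                  "gamma": ["scale", 0.001, 0.0001]},
                      cv=3)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)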

    OpenNN

Jumping from something that is completely beginner-friendly to something meant for experienced developers, OpenNN offers an arsenal of advanced analytics.
It features Neural Designer, a tool for advanced analytics that provides graphs and tables to interpret data entries.

H2O: Open Source AI Platform

H2O is an open-source deep learning platform. It is a business-oriented artificial intelligence tool that helps organizations make decisions from data and enables users to draw insights. It comes in two main editions: standard H2O and Sparkling Water, which integrates H2O with Apache Spark. It can be used for predictive modelling, risk and fraud analysis, insurance analytics, advertising technology, healthcare and customer intelligence.
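
A hedged sketch of H2O's Python API, assuming the h2o package is installed and a local cluster can be started; the CSV path and the "fraud_flag" target column are placeholders for illustration.

import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()   # starts or connects to a local H2O instance

frame = h2o.import_file("claims.csv")             # placeholder dataset
train, test = frame.split_frame(ratios=[0.8])

model = H2OGradientBoostingEstimator(ntrees=50)
model.train(y="fraud_flag", training_frame=train)  # placeholder label column

print(model.model_performance(test))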

    Google ML Kit

Google ML Kit, Google's machine learning beta SDK for mobile developers, is designed to enable developers to build personalised features on Android and iOS phones.



    The kit allows developers to embed machine learning technologies with app-based APIs running on the device or in the cloud. These include features such as face and text recognition, barcode scanning, image labelling and more.
    Developers are also able to build their own TensorFlow Lite models in cases where the built-in APIs may not suit the use case.
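
One Python-side step of that custom-model route, sketched with the TensorFlow Lite converter (the tiny model below is a throwaway placeholder; bundling the .tflite file into the Android or iOS app happens separately in the app code):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:    # file to ship with the mobile app
    f.write(tflite_bytes)
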
With this, we have come to the end of our Artificial Intelligence Tools & Frameworks blog. These were some of the tools that serve as a platform for data scientists and engineers to solve real-life problems and build underlying architectures that are better and more robust.

    Tools of AI

    AI has developed many tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

    Search and optimization

    Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[177] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[178] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[179] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[122] Many learning algorithms use search algorithms based on optimization.
    Simple exhaustive searches[180] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[181] Heuristics limit the search for solutions into a smaller sample size.[123]
A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[182]
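
A toy sketch of the blind hill-climbing idea described above: start from a random guess and keep only the steps that move the objective uphill. The one-dimensional "landscape" here is invented purely for illustration.

import random

def hill_climb(objective, start, step=0.1, iterations=1000):
    best = start
    for _ in range(iterations):
        candidate = best + random.uniform(-step, step)
        if objective(candidate) > objective(best):   # keep only uphill moves
            best = candidate
    return best

# A landscape with a single peak at x = 2.
peak = hill_climb(lambda x: -(x - 2) ** 2, start=random.uniform(-10, 10))
print(round(peak, 2))   # close to 2.0
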
Figure: a particle swarm seeking the global minimum.

Evolutionary computation uses a form of optimization search. For example, such methods may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[183] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[184][185]
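
A toy sketch of the mutate/recombine/select loop described above, evolving bit strings toward the all-ones string; the population size, mutation rate and fitness function are arbitrary choices for illustration.

import random

def fitness(bits):
    return sum(bits)   # "fitter" means more ones

def evolve(length=20, pop_size=30, generations=50, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]               # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                        # recombination
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                         # mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()))   # usually 20 or very close to it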

Logic

    Logic[186] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[187] and inductive logic programming is a method for learning.[188]
    Several different forms of logic are used in AI research. Propositional logic[189] involves truth functions such as "or" and "not". First-order logic[190] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a "degree of truth" (between 0 and 1) to vague statements such as "Alice is old" (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as "if you are close to the destination station and moving fast, increase the train's brake pressure"; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][192][193]
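
A toy sketch of the train-brake rule quoted above: degrees of truth in [0, 1] for "close" and "fast" are combined (here with min, a common fuzzy choice for AND) into a brake level. The membership functions are invented for illustration.

def closeness(distance_m):
    # 1.0 at the station, falling to 0.0 beyond 1000 m
    return max(0.0, min(1.0, 1.0 - distance_m / 1000.0))

def speediness(speed_kmh):
    # 0.0 below 20 km/h, rising to 1.0 at 100 km/h and above
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 80.0))

def brake_pressure(distance_m, speed_kmh):
    # Rule: IF close AND fast THEN increase brake pressure.
    return min(closeness(distance_m), speediness(speed_kmh))

print(brake_pressure(200.0, 90.0))   # fairly close and fairly fast -> 0.8
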
Default logics, non-monotonic logics and circumscription[98] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[86] situation calculus, event calculus and fluent calculus (for representing events and time);[87] causal calculus;[88] belief calculus;[194] and modal logics.[89]
    Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[195][196]
Figure: expectation-maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.

Probabilistic methods for uncertain reasoning


Artificial neural networks

Figure: a neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

Neural networks were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from multiple other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The neural network forms "concepts" that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning "leg" might be coupled with a subnetwork meaning "foot" that includes the sound for "foot". Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural networks can learn both continuous functions and, surprisingly, digital logical operations. Neural networks' early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k][220] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[221][222]
The study of non-learning artificial neural networks[210] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others[citation needed].
The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[223] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning ("fire together, wire together"), GMDH or competitive learning.[224]
    Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[225][226] and was introduced to neural networks by Paul Werbos.[227][228][229]
    Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[230]
    To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches[citation needed]. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".[231]
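
A toy illustration of the weighted-vote neuron and the gradient-descent weight adjustment summarized above: a single sigmoid neuron learning the logical AND function (the learning rate and iteration count are arbitrary).

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)   # logical AND of the two inputs

rng = np.random.default_rng(0)
w, b = rng.normal(size=2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)           # weighted "votes" through a nonlinearity
    grad = out - y                     # gradient of the cross-entropy loss
    w -= 0.5 * X.T @ grad / len(X)     # nudge weights downhill
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b)))    # approximately [0. 0. 0. 1.]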

Deep feedforward neural networks

Deep learning is any artificial neural network that can learn a long chain of causal links[dubious ]. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a "credit assignment path" (CAP) depth of seven[citation needed]. Many deep learning systems need to be able to learn chains ten or more causal links in length.[232] Deep learning has transformed many important subfields of artificial intelligence[why?], including computer vision, speech recognition, natural language processing and others.[233][234][232]
    According to one overview,[235] the expression "Deep Learning" was introduced to the machine learning community by Rina Dechter in 1986[236] and gained traction after Igor Aizenberg and colleagues introduced it to artificial neural networks in 2000.[237] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[238][page needed] These networks are trained one layer at a time. Ivakhnenko's 1971 paper[239] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning.[240] Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[241]
    Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[242] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[243] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[232]
CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's "AlphaGo Lee", the program that beat a top Go champion in 2016.[244]
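
For concreteness, here is a minimal convolutional network in tf.keras, in the spirit of the CNN architectures discussed above (the layer sizes are arbitrary and do not correspond to any of the cited systems):

import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.summary()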

Deep recurrent neural networks

    Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[245] which are in theory Turing complete[246] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[232] RNNs can be trained by gradient descent[247][248][249] but suffer from the vanishing gradient problem.[233][250] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[251]
    Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[252] LSTM is often trained by Connectionist Temporal Classification (CTC).[253] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[254][255][256] For example, in 2015, Google's speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[257] Google also used LSTM to improve machine translation,[258] Language Modeling[259] and Multilingual Language Processing.[260] LSTM combined with CNNs also improved automatic image captioning[261] and a plethora of other applications.
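
A minimal LSTM sketch in tf.keras, in the spirit of the recurrent models described above (a toy binary sequence classifier; the vocabulary size and layer widths are arbitrary):

import tensorflow as tf

rnn = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
rnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# One toy integer-encoded sequence, just to show the end-to-end shape.
print(rnn(tf.constant([[1, 2, 3]])).numpy())   # a single probability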

Evaluating progress

    AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[262] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[263][264] Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[265] Moravec's paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[128]
    Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[266][267] E-sports such as StarCraft continue to provide additional public benchmarks.[268][269] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[270]
    The "imitation game" (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[271] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.[272]
Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[273][274]



