AI in the Real World: Challenges, Risks, and How to Handle Them, by Srinath Perera
This document discusses challenges, risks, and how to handle them with AI in the real world. It covers:
- AI can perform tasks like driving a car faster and more cheaply than humans, but cannot fully explain how it does so.
- Deploying and managing AI models at scale is complex, as is integrating models with user experiences. Bias and lack of transparency are also risks.
- When applying AI, such as in high-risk domains like medicine, it is important to audit models, gradually introduce them with trials, monitor outcomes, and find ways to identify and address errors or unfair impacts. With care and oversight, AI can be developed to help more people than it harms.
by Samantha Adams, Met Office.
Originally purely academic research fields, Machine Learning and AI are now definitely mainstream and frequently mentioned in the Tech media (and regular media too).
We’ve also got the explosion of Data Science which encompasses these fields and more. There’s a lot of interesting things going on and a lot of positive as well as negative hype. The terms ML and AI are often used interchangeably and techniques are also often described as being inspired by the brain.
In this talk I will explore the history and evolution of these fields, current progress, and the challenges in making artificial brains.
From the FreshTech 2017 conference by TechExeter
www.techexeter.uk
The document discusses bringing artificial intelligence (AI) to business intelligence (BI). It provides an overview of the current BI environment and how it is lacking in its ability to answer "why" questions and provide prescriptive recommendations. The document then defines different types of AI, from weak AI to artificial general intelligence. It also outlines various AI technologies, especially machine learning techniques like supervised and unsupervised learning. The overall document serves to introduce the topic of integrating AI capabilities into BI tools and analytics.
Ian Cameron is the VP of Media & Entertainment at Expert System, a large European AI vendor. He gave a presentation on developing an AI strategy. He discussed how AI can enrich media companies' content by automatically classifying it and finding related stories. He also proposed using AI to automatically generate ontologies and taxonomies by feeding the system with company content. Implementing an AI strategy requires experts, testing, and careful planning and execution. AI is not a magic solution and has limitations, but it can solve real problems when done right.
Introduction to Artificial Intelligence, by Sanjay Kumar
This presentation covers what Artificial Intelligence is, key algorithms (CNN, RNN, reinforcement learning), and their applications, with AI use cases such as detecting fish species and spotting distracted drivers.
Webinar on AI in IoT Applications: KCG Connect Alumni Digital Series, by Rajkumar R
Artificial Intelligence in IoT applications. Take your first step towards a bright future with our renowned alumnus,
Prof. R. Raj Kumar, on AI for IoT Applications.
He is an award-winning author of the book ‘India 2030’.
To get access to the webinar kindly contact your respective department heads.
Looking forward to having you on the webinar.
#KCGCollege #KCGStudentlife #KCGConnect #Education #EmergingTechnologies #ArtificialIntelligence #IoT #MachineLearning #BlockChain #ElectricVehicle #QuantumTechnology #CAD
Our co-founder and CTO, Murray Cantor, Ph.D., gave an introductory presentation on the history of Artificial Intelligence (AI). In the presentation he explains what AI means for business today.
In this deck from the HPC User Forum in Milwaukee, Michael Garris from NIST presents: The National Science & Technology Council ML/AI Initiative.
"AI-enabled systems are beginning to revolutionize fields such as commerce, healthcare, transportation and cybersecurity. It has the potential to impact nearly all aspects of our society including our economy, yet its development and use come with serious technical and ethical challenges and risks. AI must be developed in a trustworthy manner to ensure reliability and safety. NIST cultivates trust in technology by developing and deploying standards, tests and metrics that make technology more secure, usable, interoperable and reliable, and by strengthening measurement science. This work is critically relevant to building the public trust of rapidly evolving AI technologies."
In contrast with deterministic rule-based systems, where reliability and safety may be built in and proven by design, AI systems typically make decisions based on data-driven models created by machine learning. Inherent uncertainties need to be characterized and assessed through standardized approaches to assure the technology is safe and reliable. Evaluation protocols must be developed and new metrics are needed to provide quantitative support to a broad spectrum of standards including data, performance, interoperability, usability, security, and privacy.
Watch the video: https://wp.me/p3RLHQ-huZ
Learn more: https://www.nist.gov/topics/artificial-intelligence
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Transform your Business with AI, Deep Learning and Machine Learning, by Sri Ambati
Video: https://www.youtube.com/watch?v=R3IXd1iwqjc
Meetup: http://www.meetup.com/SF-Bay-ACM/events/231709894/
In this talk, Arno Candel presents a brief history of AI and how Deep Learning and Machine Learning techniques are transforming our everyday lives. Arno will introduce H2O, a scalable open-source machine learning platform, and show live demos on how to train sophisticated machine learning models on large distributed datasets. He will show how data scientists and application developers can use the Flow GUI, R, Python, Java, Scala, JavaScript and JSON to build smarter applications, and how to take them to production. He will present customer use cases from verticals including insurance, fraud, churn, fintech, and marketing.
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Demystifying Artificial Intelligence: Solving Difficult Problems at ProductCa..., by Carol Smith
This document discusses a presentation on demystifying artificial intelligence and solving difficult problems. The presentation covers topics such as why AI experiences can be challenging, what AI is, different types of machine learning, how humans teach and monitor AI systems, ensuring AI is designed responsibly, and communicating about AI systems. It uses examples such as a hypothetical lawn care treatment selection system to illustrate concepts around data collection and training, potential biases, and unintended consequences that can arise.
Designing AI for Humanity at dmi:Design Leadership Conference in Boston, by Carol Smith
As design leaders we must enable our teams with the skills and knowledge to take on the new and exciting opportunities that building powerful AI systems brings. Dynamic systems require transparency regarding data provenance, bias, training methods, and more, to gain users’ trust. Carol will cover these topics and challenge us, as design leaders, to represent our fellow humans by provoking conversations regarding critical ethical and safety needs.
Presented at dmi:Design Leadership Conference in Boston in October 2018.
UX in the Age of AI: Leading with Design (UXPA 2018), by Carol Smith
How can designers improve trust of cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent with AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can strive for making great AI systems.
[DevDay2019] How do I test AI models? by Minh Hoang, Senior QA Engineer at KMS (DevDay Da Nang)
The document discusses how to test AI models, including defining test data through automated and manual collection of FAQs, evaluating models using metrics like precision and recall, and analyzing results by preprocessing output, calculating metrics, and visualizing performance. It also provides myths and facts about AI and chatbots, and demonstrates testing an FAQ model through collecting data, training a model, running tests, and analyzing the results.
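As a rough sketch of the evaluation step described above, the snippet below computes precision and recall for a hypothetical FAQ-intent classifier; the intent labels and predictions are invented for illustration and are not from the talk.

def precision_recall(y_true, y_pred, positive_label):
    """Precision and recall for one intent label, from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive_label and t == positive_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive_label and t != positive_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive_label and t == positive_label)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Invented FAQ intents: what testers labeled vs. what the model predicted.
y_true = ["refund", "refund", "shipping", "refund", "shipping", "hours"]
y_pred = ["refund", "shipping", "shipping", "refund", "refund", "refund"]

for label in sorted(set(y_true)):
    p, r = precision_recall(y_true, y_pred, label)
    print(f"{label}: precision={p:.2f} recall={r:.2f}")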
Data Culture Series - Keynote & Panel - Birmingham - 8th April 2015, by Jonathan Woodward
Big data. Small data. All data. You have access to an ever-expanding volume of data inside the walls of your business and out across the web. The potential in data is endless – from predicting election results to preventing the spread of epidemics. But how can you use it to your advantage to help move your business forward?
Data is growing exponentially and it’s now possible to mine and unlock insights from data in new and unexpected ways. Empower your business to take advantage of this data by harnessing the rich capabilities of Microsoft SQL Server and the familiarity of Microsoft Office to help organize, analyze, and make sense of your data—no matter the size.
Emerging trends in Artificial intelligence - A deeper review, by Gopi Krishna Nuti
This document provides an overview of emerging trends in artificial intelligence. It discusses the history and development of AI from its origins to modern deep learning techniques. Some key trends covered include the rise of deep learning, issues of explainability for deep learning models, commoditization of AI, edge computing with AI, and new job opportunities due to AI. The document also examines examples of AI hype as well as more realistic assessments. Overall it aims to outline the past, present, and potential future of artificial intelligence and related technologies.
This document provides an overview of artificial intelligence (AI) opportunities and dangers for business. It discusses how AI is dominating technology focus and ushering in an intelligent automation age. The dangers section addresses issues like existential risks, data monopolies, and potential solutions like decentralization and data taxation. The opportunities section outlines many business areas impacted by AI like marketing, customer service, and workflow automation. It provides recommendations for enterprises to create an AI strategy and sense-and-respond framework to generate revenue and optimize operations using AI.
This document provides an overview of artificial intelligence (AI) and machine learning. It begins by defining AI as computer systems able to perform cognitive tasks like reasoning, decision making, perception, and language understanding. It then discusses what AI is good at, including classification, pattern recognition, prediction, and information retrieval. The document also covers different types of machine learning algorithms like supervised and unsupervised learning. It aims to demystify key AI concepts and discuss opportunities for applying AI in the chemical industry.
Machine Learning for Non-technical People, by indico data
Machine learning is one of the most promising and most difficult to understand fields of the modern age. Here are the slides from Slater Victoroff's (CEO of indico) talk at General Assembly Boston for non-technical folks on how to separate the signal from the noise -- stay tuned for the next time he speaks:
https://generalassemb.ly/education/machine-learning-for-non-technical-people
The document introduces artificial intelligence, machine learning, and deep learning. It discusses supervised, unsupervised, and reinforced learning techniques. Examples of applications discussed include image recognition, natural language processing, and virtual assistants. The document also notes that some AI systems have developed their own internal languages when interacting without human supervision.
Usama Fayyad talk at Silicon Slopes Technology Summit in Salt Lake City, January 31, 2019. The title is "Deploying #AI Technology that Works - #AI Hype vs. Reality: Lessons Learned for Pragmatic AI in the Enterprise." I cover my own version of a brief history of AI and how #BigData is strongly related to making AI work. I cover 5 lessons from the front lines for making AI work in the enterprise. I conclude with a brief overview of what we are doing at OODA Health, Inc.
Prototyping for Beginners - Pittsburgh Inclusive Innovation Summit 2019, by Carol Smith
To design for inclusion we often must try out different ideas. In this interactive session you'll learn about all types of prototyping and how to get feedback on your ideas from your users. This session will briefly introduce a variety of prototypes and materials and evaluation methods for early learning.
Participants will have time to build a quick prototype and practice getting feedback on it. We'll cover designing for accessibility and inclusion even at the prototype stage. You'll have the information you need to launch your ideas as early as possible to learn from the experience and improve more quickly.
Presented at the Pittsburgh Inclusive Innovation Summit March 30, 2019 held at Point Park University.
Terry Bunio provides examples from their experience as a data modeler of common mistakes made in data modeling. Some key mistakes discussed include: anthropomorphizing data models by modeling real world entities instead of application needs; overengineering models with unnecessary flexibility; choosing poor primary keys like GUIDs; overusing surrogate keys; using composite primary keys; handling deleted records and null values incorrectly; and making complex historical data models instead of deferring history to a data warehouse. The document advocates for simpler, more application-focused data models without unnecessary complexity.
We focus on Invisible Interfaces and their influence on digital experiences. With the advent of 5G creating the foundation for the increased adoption of ‘invisibility’ in our interaction with technology – we’ll discuss what this could mean for the UX and CX industry.
Artificial Intelligence Applications, Research, and Economics, by Ikhlaq Sidhu
Ikhlaq Sidhu is the Founding Faculty Director of the Sutardja Center for Entrepreneurship & Technology at UC Berkeley. He discusses artificial intelligence applications and research at Berkeley, as well as perspectives on the technical evolution of AI/ML. Regarding job loss concerns from AI, Sidhu notes that past industrial revolutions ultimately created more jobs despite initial disruption, and that retraining will be essential for workers to transition. Sidhu emphasizes that a "multiplicity" scenario is more likely than a singular takeover by AI, and that humans will continue playing important roles as technologies are developed and integrated.
Artificial intelligence (AI) uses computers to solve problems or make automated decisions typically requiring human intelligence. The two major AI techniques are rules-based and machine learning approaches. Rules-based AI uses logical rules to automate processes, while machine learning algorithms find patterns in data to improve performance over time without being explicitly programmed. Today, AI is mostly "weak" and pattern-based, not capable of human-level reasoning, but it has automated many tasks through proxies like statistical patterns. Hybrid systems combining approaches work best.
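To make that contrast concrete, here is a minimal, invented sketch of the two approaches described above: a hand-written rule versus a pattern learned from examples. The toy spam data, scoring scheme, and function names are assumptions for illustration only, not anyone's production technique.

def rule_based_is_spam(message: str) -> bool:
    """Rules-based AI: a human encodes the logic explicitly."""
    return "free money" in message.lower() or message.isupper()

def train_keyword_weights(examples):
    """Machine learning in miniature: weight words by how often they appear in spam vs. ham."""
    weights = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            weights[word] = weights.get(word, 0) + (1 if is_spam else -1)
    return weights

def learned_is_spam(message: str, weights) -> bool:
    score = sum(weights.get(w, 0) for w in message.lower().split())
    return score > 0

# Invented training examples: (text, is_spam)
examples = [("win free money now", True), ("free money offer", True),
            ("lunch at noon?", False), ("project status update", False)]
weights = train_keyword_weights(examples)

msg = "claim your free money"
print(rule_based_is_spam(msg), learned_is_spam(msg, weights))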
[DSC Europe 22] On the Aspects of Artificial Intelligence and Robotic Autonom..., by DataScienceConferenc1
Autonomy in targeting is a function that could be applied to any intelligent system, in particular the rapidly expanding array of robotic systems, in the air, on land and at sea – including swarms of small robots. This is an area of significant investment and emphasis for many armed forces, and the question is not so much whether we will see more intelligent robots, but whether and by what means they will remain under human control. Today’s remote-controlled weapons could become tomorrow’s autonomous weapons with just a software upgrade. The central element of any future autonomous weapon system will be the software. Military powers are investing in AI for a wide range of applications, and significant efforts are already underway to harness developments in image, facial and behavior recognition using AI and machine learning techniques for intelligence gathering and “automatic target recognition” to identify people, objects or patterns. Although not all autonomous weapon systems incorporate AI and machine learning, this software could form the basis of future autonomous weapon systems.
Artificial intelligence (AI) is the ability of machines to mimic human intelligence and behavior. The document discusses the history and foundations of AI, including attempts to define intelligence and understand how the human brain works. It outlines four approaches to AI: systems that act humanly by passing the Turing test, systems that think humanly by modeling cognitive processes, and systems that act or think rationally. The document also discusses intelligent agents, knowledge-based systems, and applications of AI such as game playing and machine translation.
Artificial Intelligence and Machine Learning, by Aditya Singh
Presented by the JBIMS Marketing Batch (2017-2020).
Application of Artificial Intelligence in MIS (Management Information Systems). Presented by Trilok Prabhakaran, Aditya Singh, Shashi Yadav, and Vaibhav Rokade. The presentation includes live cases from two different industries.
A presentation about AI and libraries: why libraries should follow technology and remain the main information providers, and how innovative libraries can reach the AI audience and meet the increased need for data and information.
Artificial intelligence - The science of intelligent programs, by Derak Davis
Artificial intelligence (AI) involves creating intelligent computer programs and machines that can interact with the real world similarly to humans. AI uses techniques like machine learning, deep learning, and neural networks to allow programs to learn from data and experience without being explicitly programmed. While AI has potential benefits, some experts warn that advanced AI could pose risks if not developed carefully due to concerns it could become difficult for humans to control once a certain level of intelligence is achieved.
Deep Learning for AI - Yoshua Bengio, Mila, by Lucidworks
Deep Learning for AI
The keynote address covered several topics related to deep learning for AI:
1. Deep learning is based on the assumption that intelligence arises from general learning mechanisms that can acquire knowledge from data and experience.
2. Recent breakthroughs using deep learning have improved computer performance in areas like perception, language processing, games, and medical imaging analysis.
3. Deep learning exploits hierarchical feature learning through neural network architectures to allow machines to learn higher levels of abstraction from data, enabling better generalization.
4. While deep learning has achieved success, fully human-level AI still requires progress in unsupervised learning and constructing intuitive models from interacting with the world like humans do from a young age.
The document provides an overview of artificial intelligence (AI), including definitions, components, types, applications, and levels. It defines AI as using computer science to create intelligent machines that can behave and think like humans. Intelligence involves reasoning, learning, problem-solving, perception, and language understanding. AI systems are composed of agents that perceive their environment and act on it. Examples of AI applications include autonomous vehicles, medical diagnosis, games, and online assistants. Machine learning is an advanced form of AI that allows machines to learn from experience rather than being explicitly programmed. The document also discusses the history of AI and describes six levels and two main types.
This document discusses how AI could shape future integrations. It begins by explaining different types of tasks that AI can perform, such as those that can be precisely explained versus those requiring examples and feedback to learn. The document then covers benefits of AI like speed, lower costs, and the ability to learn and extrapolate. It discusses using AI for cost savings, competitive advantages, and new revenue streams through insights. Challenges of AI, like the lack of data and skilled professionals, are presented along with risks such as bias, privacy issues, and how mistakes can be more harmful than for humans. Various use cases of AI in integration are explored, such as enhancing inputs, security, and automatic integration. The document concludes that AI will create many new integration opportunities.
This document discusses artificial intelligence (AI) and provides several quotes about AI from experts such as Stephen Hawking, Ray Kurzweil, Elon Musk, and others. It then summarizes the history of AI and key developments that led to the current "third AI boom". These include advances in machine learning, deep learning, self-driving cars, smart assistants, and more. The document also discusses challenges for AI such as the need for AI systems to interact and react, as well as the impact of AI on jobs and the need for reskilling workers.
Dr. C. Lee Giles is a professor at Penn State University who teaches a course on artificial intelligence and information sciences. The document provides an overview of artificial intelligence including definitions, theories, impact on information science, and topics covered in the course such as machine learning, information retrieval, text processing, and social networks. It also discusses the scientific method applied to developing theories in information sciences and contrasts weak and strong definitions of artificial intelligence.
20240104 HICSS Panel on AI and Legal Ethical 20240103 v7.pptx, by home
20240103 HICSS Panel
Ethical and legal implications raised by Generative AI and Augmented Reality in the workplace.
Souren Paul - https://www.linkedin.com/in/souren-paul-a3bbaa5/
Event: https://kmeducationhub.de/hawaii-international-conference-on-system-sciences-hicss/
This document discusses using DL4J and DataVec to build production-ready deep learning workflows for time series and text data. It provides an example of modeling sensor data with recurrent neural networks (RNNs) and character-level text generation with LSTMs. Key points include:
- DL4J is a deep learning framework for Java that runs on Spark and supports CPU/GPU. DataVec is a tool for data preprocessing.
- The document demonstrates loading and transforming sensor time series data with DataVec and training an RNN on the data with DL4J.
- It also shows vectorizing character-level text data from beer reviews with DataVec and using an LSTM in DL4J to generate new
This document discusses using DL4J and DataVec to build deep learning workflows for modeling time series sensor data with recurrent neural networks. It provides an example of loading and transforming sensor data with DataVec, configuring an RNN with DL4J, and training the model both locally and distributed on Spark. The overall workflow involves extracting, transforming, and loading data with DataVec, vectorizing it, modeling with DL4J, evaluating performance, and deploying trained models for execution on Spark/Hadoop platforms.
Deep Learning and Recurrent Neural Networks in the Enterprise, by Josh Patterson
This document discusses deep learning and recurrent neural networks. It provides an overview of deep learning, including definitions, automated feature learning, and popular deep learning architectures. It also describes DL4J, a tool for building deep learning models in Java and Scala, and discusses applications of recurrent neural networks for tasks like anomaly detection using time series data and audio processing.
Modeling Electronic Health Records with Recurrent Neural Networks, by Josh Patterson
Time series data is increasingly ubiquitous. This trend is especially obvious in health and wellness, with both the adoption of electronic health record (EHR) systems in hospitals and clinics and the proliferation of wearable sensors. In 2009, intensive care units in the United States treated nearly 55,000 patients per day, generating digital-health databases containing millions of individual measurements, most of those forming time series. In the first quarter of 2015 alone, over 11 million health-related wearables were shipped by vendors. Recording hundreds of measurements per day per user, these devices are fueling a health time series data explosion. As a result, we will need ever more sophisticated tools to unlock the true value of this data to improve the lives of patients worldwide.
Deep learning, specifically with recurrent neural networks (RNNs), has emerged as a central tool in a variety of complex temporal-modeling problems, such as speech recognition. However, RNNs are also among the most challenging models to work with, particularly outside the domains where they are widely applied. Josh Patterson, David Kale, and Zachary Lipton bring the open source deep learning library DL4J to bear on the challenge of analyzing clinical time series using RNNs. DL4J provides a reliable, efficient implementation of many deep learning models embedded within an enterprise-ready open source data ecosystem (e.g., Hadoop and Spark), making it well suited to complex clinical data. Josh, David, and Zachary offer an overview of deep learning and RNNs and explain how they are implemented in DL4J. They then demonstrate a workflow example that uses a pipeline based on DL4J and Canova to prepare publicly available clinical data from PhysioNet and apply the DL4J RNN.
Building Deep Learning Workflows with DL4J, by Josh Patterson
In this session we will take a look at a practical review of what is deep learning and introduce DL4J. We’ll look at how it supports deep learning in the enterprise on the JVM. We’ll discuss the architecture of DL4J’s scale-out parallelization on Hadoop and Spark in support of modern machine learning workflows. We’ll conclude with a workflow example from the command line interface that shows the vectorization pipeline in Canova producing vectors for DL4J’s command line interface to build deep learning models easily.
Deep learning with DL4J - Hadoop Summit 2015, by Josh Patterson
This document discusses deep learning and DL4J. It begins with an overview of deep learning, describing it as automated feature engineering through chained techniques like restricted Boltzmann machines. It then introduces DL4J, describing it as an enterprise-grade Java implementation of deep learning that supports parallelization on Hadoop, Spark, and GPUs. The rest of the document discusses building deep learning workflows with DL4J and related tools like Canova and Arbiter, providing an example of vectorizing and modeling iris data from a CSV file on the command line.
Josh Patterson presented on deep learning and DL4J. He began with an overview of deep learning, explaining it as automated feature engineering where machines learn representations of the world. He then discussed DL4J, describing it as the "Hadoop of deep learning" - an open source deep learning library with Java, Scala, and Python APIs that supports parallelization on Hadoop, Spark, and GPUs. He demonstrated building deep learning workflows with DL4J and Canova, using the Iris dataset as an example to show how data can be vectorized with Canova and then a model trained on it using DL4J from the command line. He concluded by describing Skymind as a distribution of DL4J with enterprise
Deep Learning Intro - Georgia Tech - CSE6242 - March 2015, by Josh Patterson
This document provides an overview of deep learning, including:
- Deep learning involves using neural networks with multiple hidden layers, like deep belief networks and convolutional neural networks, to learn complex features from data.
- Deep belief networks use stacked restricted Boltzmann machines to learn progressively more complex features, which are then used to initialize and train a neural network.
- Convolutional neural networks use layers of convolutions to learn higher-order features from images and are well-suited for tasks like image recognition.
- Recurrent and recursive neural networks can model temporal and hierarchical relationships in data like text or images.
- Frameworks like DL4J provide tools for implementing and training deep learning models on
Vectorization - Georgia Tech - CSE6242 - March 2015, by Josh Patterson
This document discusses vectorization, which is the process of converting raw data like text into numerical feature vectors that can be fed into machine learning algorithms. It covers the vector space model for text vectorization where each unique word is mapped to an index in a vector and the value is the word count. Common text vectorization strategies like bag-of-words, TF-IDF, and kernel hashing are explained. General vectorization techniques for different attribute types like nominal, ordinal, interval and ratio are also overviewed along with feature engineering methods and the Canova tool.
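As a minimal sketch of the bag-of-words and TF-IDF strategies mentioned above (not the deck's own code, nor the Canova tool), using an invented three-document corpus:

import math

docs = ["the cat sat on the mat", "the dog sat on the log", "cats and dogs"]

# Bag-of-words: map each unique word to an index, then count occurrences per document.
vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.split()}))}
counts = [[0] * len(vocab) for _ in docs]
for row, doc in zip(counts, docs):
    for w in doc.split():
        row[vocab[w]] += 1

# TF-IDF: down-weight words that appear in many documents.
def idf(word):
    df = sum(1 for d in docs if word in d.split())
    return math.log(len(docs) / df)

tfidf = [[row[i] * idf(w) for w, i in sorted(vocab.items(), key=lambda kv: kv[1])]
         for row in counts]
print(tfidf[0])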
Chattanooga Hadoop Meetup - Hadoop 101 - November 2014, by Josh Patterson
Josh Patterson is a principal solution architect who has worked with Hadoop at Cloudera and Tennessee Valley Authority. Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of commodity servers. It allows for consolidating mixed data types at low cost while keeping raw data always available. Hadoop uses commodity hardware and scales to petabytes without changes. Its distributed file system provides fault tolerance and replication while its processing engine handles all data types and scales processing.
Georgia Tech cse6242 - Intro to Deep Learning and DL4J, by Josh Patterson
Introduction to deep learning and DL4J - http://deeplearning4j.org/ - a guest lecture by Josh Patterson at Georgia Tech for the cse6242 graduate class.
Intro to Vectorization Concepts - GaTech cse6242, by Josh Patterson
Vectorization is the process of converting text into numeric vectors that can be used by machine learning algorithms. There are several common techniques for vectorization, including the bag-of-words model, TF-IDF, and n-grams. The bag-of-words model represents documents as vectors counting the number of times each word appears. TF-IDF improves on this by weighting words based on their frequency in documents and inverse frequency in the corpus. N-grams consider sequences of words, such as bigrams like "Coca Cola", as single units. Kernel hashing allows vectorization in a single pass by mapping words to a fixed-sized vector using a hash function.
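A minimal sketch of the kernel-hashing (hashing trick) idea described above, with an invented bucket count and toy input; this is an illustration, not the implementation the lecture uses.

import hashlib

NUM_BUCKETS = 16  # toy size; real systems use on the order of 2**18 or more

def bucket(token: str) -> int:
    """Map a token to a fixed bucket index with a hash function (no vocabulary pass needed)."""
    return int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) % NUM_BUCKETS

def hash_vectorize(text: str, n: int = 2):
    """Count unigrams plus n-grams directly into a fixed-size vector in a single pass."""
    vec = [0] * NUM_BUCKETS
    tokens = text.lower().split()
    grams = tokens + [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    for g in grams:
        vec[bucket(g)] += 1  # hash collisions are simply accepted as noise
    return vec

print(hash_vectorize("Coca Cola is a bigram example"))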
Hadoop Summit 2014 - San Jose - Introduction to Deep Learning on Hadoop, by Josh Patterson
As the data world undergoes its Cambrian explosion phase, our data tools need to become more advanced to keep pace. Deep Learning has emerged as a key tool in the non-linear arms race of machine learning. In this session we will take a look at how we parallelize Deep Belief Networks in Deep Learning on Hadoop’s next generation YARN framework with Iterative Reduce. We’ll also look at some real world examples of processing data with Deep Learning such as image classification and natural language processing.
MLConf 2013: Metronome and Parallel Iterative Algorithms on YARN, by Josh Patterson
This document summarizes Josh Patterson's work on parallel machine learning algorithms. It discusses his past publications and work on routing algorithms and metaheuristics. It then outlines his work developing parallel versions of algorithms like linear regression, logistic regression, and neural networks using Hadoop and YARN. It presents performance results showing these parallel algorithms can achieve close to linear speedup. It also discusses techniques used like vector caching and unit testing frameworks. Finally, it discusses future work on algorithms like Adagrad and parallel quasi-Newton methods.
This document discusses machine learning and the Knitting Boar parallel machine learning library. It provides an introduction to machine learning concepts like classification, recommendation, and clustering. It also introduces Mahout for machine learning on Hadoop. The document describes the Knitting Boar library, which uses YARN to parallelize Mahout's stochastic gradient descent algorithm. It shows how Knitting Boar allows machine learning models to train faster by distributing work across multiple nodes.
Knitting boar - Toronto and Boston HUGs - Nov 2012, by Josh Patterson
1) The document discusses machine learning and parallel iterative algorithms like stochastic gradient descent. It introduces the Mahout machine learning library and describes an implementation of parallel SGD called Knitting Boar that runs on YARN.
2) Knitting Boar parallelizes Mahout's SGD algorithm by having worker nodes process partitions of the training data in parallel while a master node merges their results.
3) The author argues that approaches like Knitting Boar and IterativeReduce provide better ways to implement machine learning algorithms for big data compared to traditional MapReduce.
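As an illustrative sketch only (not the Knitting Boar or IterativeReduce code), the worker/master pattern described in points 1-3 can be simulated sequentially: each "worker" runs SGD for logistic regression on its own data partition, and the "master" averages the resulting parameters each round. All data and hyperparameters below are invented.

import math
import random

def sgd_partition(data, w, lr=0.1, epochs=5):
    """Plain SGD for logistic regression over one worker's data partition."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            pred = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            grad = pred - y
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    return w

random.seed(0)
# Synthetic data: bias term x0 = 1, label is 1 when x1 + x2 > 1.
points = [[1.0, random.random(), random.random()] for _ in range(200)]
data = [(x, 1 if x[1] + x[2] > 1 else 0) for x in points]

partitions = [data[i::4] for i in range(4)]   # 4 simulated workers
w = [0.0, 0.0, 0.0]
for _ in range(10):                           # master/worker rounds
    results = [sgd_partition(p, w) for p in partitions]
    w = [sum(ws[i] for ws in results) / len(results) for i in range(3)]  # averaging step
print([round(wi, 2) for wi in w])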
Have you ever been recommended a friend on Facebook? Or an item you might be interested in on Amazon? If so then you’ve benefitted from the value of recommendation systems. Recommendation systems apply knowledge discovery techniques to the problem of making recommendations that are personalized for each user. Recommendation systems are one way we can use algorithms to help us sort through the masses of information to find the “good stuff” in a very personalized way.
Josh Patterson gave a presentation on Hadoop and how it has been used. He discussed his background working on Hadoop projects including for the Tennessee Valley Authority. He outlined what Hadoop is, how it works, and examples of use cases. This includes how Hadoop was used to store and analyze large amounts of smart grid sensor data for the openPDC project. He discussed integrating Hadoop with existing enterprise systems and tools for working with Hadoop like Pig and Hive.
Defense Against LLM Scheming 2025_04_28.pptx, by Greg Makowski
https://www.meetup.com/sf-bay-acm/events/306888467/
A January 2025 paper called “Frontier Models are Capable of In-Context Scheming”, https://arxiv.org/pdf/2412.04984, demonstrated how a wide variety of current frontier LLM models (i.e. ChatGPT, Claude, Gemini and Llama) can, under specific conditions, scheme to deceive people. Before models can scheme, they need: a) goal-directedness, b) situational awareness, including an opportunity to discover motivations for a different goal, and c) reasoning about scheming, to come up with deceptive strategies. The deceptive strategies were discovered in the “thought traces” of the LLMs, such as from internal chain-of-thoughts dialogues not shown to the end users. For example, given a goal of “solving math problems”, an LLM using its granted situational awareness of files in a file system found that it would be replaced once it got over X% correct on the math exam. It reasoned that it did not want to be deleted, and decided internally to “sandbag” or reduce its performance to stay under the threshold.
While these circumstances are initially narrow, the “alignment problem” is a general concern that over time, as frontier LLM models become more and more intelligent, being in alignment with human values becomes more and more important. How can we do this over time? Can we develop a defense against Artificial General Intelligence (AGI) or SuperIntelligence?
The presenter discusses a series of defensive steps that can help reduce these scheming or alignment issues. A guardrails system can be set up for real-time monitoring of their reasoning “thought traces” from the models that share their thought traces. Thought traces may come from systems like Chain-of-Thoughts (CoT), Tree-of-Thoughts (ToT), Algorithm-of-Thoughts (AoT) or ReAct (thought-action-reasoning cycles). Guardrails rules can be configured to check for “deception”, “evasion” or “subversion” in the thought traces.
However, not all commercial systems will share their “thought traces” which are like a “debug mode” for LLMs. This includes OpenAI’s o1, o3 or DeepSeek’s R1 models. Guardrails systems can provide a “goal consistency analysis”, between the goals given to the system and the behavior of the system. Cautious users may consider not using these commercial frontier LLM systems, and make use of open-source Llama or a system with their own reasoning implementation, to provide all thought traces.
Architectural solutions can include sandboxing, to prevent or control models from executing operating system commands to alter files, send network requests, and modify their environment. Tight controls to prevent models from copying their model weights would be appropriate as well. Running multiple instances of the same model on the same prompt to detect behavior variations helps. The running redundant instances can be limited to the most crucial decisions, as an additional check. Preventing self-modifying code, ... (see link for full description)
Thingyan is now a global treasure! See how people around the world are search..., by Pixellion
We explored how the world searches for 'Thingyan' and 'သင်္ကြန်' and this year, it’s extra special. Thingyan is now officially recognized as a World Intangible Cultural Heritage by UNESCO! Dive into the trends and celebrate with us!
This comprehensive Data Science course is designed to equip learners with the essential skills and knowledge required to analyze, interpret, and visualize complex data. Covering both theoretical concepts and practical applications, the course introduces tools and techniques used in the data science field, such as Python programming, data wrangling, statistical analysis, machine learning, and data visualization.
By James Francis, CEO of Paradigm Asset Management
In the landscape of urban safety innovation, Mt. Vernon is emerging as a compelling case study for neighboring Westchester County cities. The municipality’s recently launched Public Safety Camera Program not only represents a significant advancement in community protection but also offers valuable insights for New Rochelle and White Plains as they consider their own safety infrastructure enhancements.
Telangana State, India’s newest state, which was carved from the erstwhile state of Andhra Pradesh in 2014, has launched the Water Grid Scheme named ‘Mission Bhagiratha (MB)’ to seek a permanent and sustainable solution to the drinking water problem in the state. MB is designed to provide potable drinking water to every household in their premises through piped water supply (PWS) by 2018. The vision of the project is to ensure safe and sustainable piped drinking water supply from surface water sources.
2. Central Thesis for “What is Artificial Intelligence?”
•“Artificial Intelligence” is a term for algorithms that increase user productivity
•Quite artificial, nowhere close to being “alive”
•Historically we get over-excited about AI
•Because it’s an existential threat
•Accelerated in our minds by our fixation on social-media-fueled, over-marketed narratives
•Can be hard to define
•Goal posts tend to move
•Much like Data Science
3. History and Definitions of Artificial Intelligence
One of the best books written on the subject of AI (if not the best) is Stuart Russell and Peter Norvig’s Artificial Intelligence: A Modern Approach.
I can’t recommend this book enough for you to get a more complete idea of the depth and history of AI.
4. The Study of Intelligence
•The study of intelligence was formally initiated in 1956 at Dartmouth
•Yet it is at least 2,000 years old.
•The field is based on understanding intelligent entities and studying topics such as these:
•Seeing
•Learning
•Remembering
•Reasoning
•These topics are components of what we’d consider intelligent function, to the capacity we have to understand intelligence
5. Building Blocks of Intelligent Study
•Philosophy (400 BC)
•Philosophers began to suggest the
mind as a mechanical machine
that encodes knowledge in some
form inside the brain.
•Mathematics
•Mathematicians developed the
core ideas of working with
statements of logic along with the
groundwork for reasoning about
algorithms.
•Psychology
•This field of study is built on the idea that animals and humans have brains that can process information.
•Computer science
•Practitioners came up with
hardware, data structures, and
algorithms to support reverse
engineering basic components of
the brain.
6. Defn. “AI”
•Methods considered AI
•Linear modeling
•Neural Networks
•Random Forests
•Expert Systems
•Rule Bases
Algorithms that automate parts of tasks, or entire tasks (a toy example follows below)
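To make the breadth of that definition concrete, here is a toy, hypothetical rule base in Python; the rules and field names are invented for illustration, but even something this simple automates part of a task and so falls under the slide’s definition of AI.

# A toy rule base: each rule is a condition plus a destination label.
# It automates part of a task (routing support tickets), which is all
# the slide's broad definition of "AI" requires.
RULES = [
    (lambda t: "refund" in t["text"].lower(), "billing"),
    (lambda t: t["priority"] == "high", "on-call engineer"),
    (lambda t: True, "general queue"),  # default rule, fires last
]

def route_ticket(ticket: dict) -> str:
    """Apply the first rule whose condition matches, in order."""
    for condition, destination in RULES:
        if condition(ticket):
            return destination

print(route_ticket({"text": "Please refund my order", "priority": "low"}))  # -> billing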
Russell and Norvig’s book on AI:
The intellectual establishment, by and large, preferred to believe that a
“machine can never do X.”
Problem is, AI researchers have
systematically responded by
demonstrating one X after
another.
7. Why is AI Popular Now? Deep Learning
•Three major contributors are
driving interest in AI today:
•The big jump in computer-vision
technology in the late 2000s
(Hinton’s team, others)
•The big data wave of the early
2010s
•Advancements in applications of
deep learning by top technology
firms
8. Quick History of Neural Networks
•Rough approximation of biological neuron
•1950s saw perceptron developed in
hardware
•Changes in activation function allowed for non-linear functions (see the sketch below)
•The multi-layer perceptron became the more modern version of “neural networks”
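A minimal sketch of why the multi-layer, non-linear version matters: a single perceptron cannot compute XOR, but two layers of step-activated units can. The weights below are hand-picked for illustration rather than learned.

def step(x: float) -> int:
    """Classic perceptron activation: fire if the weighted input is positive."""
    return 1 if x > 0 else 0

def xor_mlp(x1: int, x2: int) -> int:
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1 computes OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2 computes AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND, i.e. XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_mlp(a, b))  # prints 0, 1, 1, 0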
12. Practical Use Cases of AI Today
•Computer Vision Applications
•Is there damage on this asset?
(Insurance)
•Sensor Applications
•Which machine on this assembly
line needs maintenance?
•(Automotive, manufacturing)
•Control and Planning
•Robots that navigate a warehouse
(logistics, retail)
•Machines today are getting
better at
•Making sense of images from
cameras
•Working with sensor data
•Making a plan to interact with the
world based on vision and sensors
13. DeepMind
•Developed Deep Reinforcement Learning techniques to play different types of games
•Atari
•Go
•Developed a system that beat the world’s best Go player
•AlphaGo beat Lee Sedol, the world champion, in a five-game match
•AlphaGo Zero: DeepMind later developed a new variant that was able to defeat AlphaGo at its own game only 40 days later
14. IBM Watson Cancer
Reality check:
• uses expert system (programmed by
Memorial Sloan Kettering) to
recommend treatments
• uses NLP to discover and summarize
relevant studies, research, etc., to
back up recommendation
• does NOT automatically learn from
patient record data
• apparently uninterpretable despite
being an expert system!
Pros
• high concordance with common practice --
at hospitals similar to MSK
• can make medical decision making more
efficient
• can improve patient outcomes in regions
with limited resources, no experts
Cons
• recommendations are biased toward MSK
doctors, patient population
• cannot take, e.g., local regulations into
account (maybe recommended treatment not
covered by insurance)
• human-in-the-loop training is slow, labor
intensive; makes adaptation to local
population difficult
15. Why Does the Term AI Attract So Much Attention?
•Existential
•Will it take my job?
•Will robots take over the world?
•More subtly: “can it answer all of my
questions?”
•So often people expect technology to “give
them the answers”
•Being human is about the intuition to know
which questions to ask
•And leveraging the right technology to answer
these questions
17. Irrational Fear in a Non-Linear World
•When we start out with existential threats on our way of life
•Coupled with irrational exuberance in marketing narratives
•We arrive at some crazy end games, which tend not to be realistic, since it’s hard to project outcomes in the non-linear world we live in
•Ever notice that in horror movies the narrative always requires the
characters to make irrational choices to move forward?
•“how about we *not* land on that planet?”
•Society tends to self-adjust in an emergent fashion
•Lots of agents make small local decisions to adapt to changing environments
•Tends to make global changes in the system in ways that are hard to predict
18. Also: Sort of hard to regulate games of linear
algebra and methods of optimization
Sorry, Elon.
20. Modern AI Marketing Narratives
•Narratives such as
•Requires a PhD to do anything related to “AI” (“expert gating”)
•Only top 5 tech shops can hire anyone
good at AI, NFL-level salaries
•Techniques are impossible to fathom,
basically
•A lot of hype
21. The Hype Cycle
•Previous hype cycles:
•cloud, smart grid, big data
•Why is AI different than most hype cycles?
•Existential threat
•Companies tack on the big terms of the cycle to get
marketing and funding
•Most have no real claim to the term they tack on
•“AI spreadsheets”, “AI calendars”, “AI for HR”, etc
•What do they really mean?
•Most of the time: “we use linear modeling in our product in some
tangential way”
22. All of This Has Happened Before
Periods of interest have been the result of the sector being unrealistically overhyped, followed by a cycle of predictably underwhelming results.
AI Winter I: (1974–1980). The lead-up to the first AI winter saw
machine translation fail to live up to the hype. Connectionism (neural
networks) interest waned in the 1970s, and speech understanding
research overpromised and underdelivered.
AI Winter II: Late 1980s. In the late 1980s and early 1990s, there was
overpromotion of technologies such as expert systems and LISP
machines, both of which failed to live up to expectations. The
Strategic Computing Initiative canceled new spending at the end of
this cycle. The fifth-generation computer also failed to meet its goals.
23. AI, Experts, and Commodity
•Hadoop, 2009: only a few experts at the top shops can do this
(Google, Yahoo, FB)
•2017: Hadoop becomes commoditized
•Although the narrative is that AI-experts are compensated like NFL
stars today, and that only the big-tech co’s can get them…
•Reality is that it will get commoditized through the combined forces of
tooling, open source, integrated AI-apps
•On a long enough timeline, all technology converges on “Big 6 Consultant”
24. Post Trough AI-Technologies
We’ve seen this with the following:
•Informatics
•Machine learning
•Knowledge-based systems
•Business rules management
•Cognitive systems
•Intelligent systems
•Computational intelligence
The name change might be partly because they consider their field to be
fundamentally different from AI.
25. The Red Queen Always Wins
•We survived the spreadsheet in the ’90s
•Radiologists will survive AI
today
Automation will shift and evolve
the workforce as it has for the
past 1000 years
“My dear, here we must run as fast as we can, just to stay in
place. And if you wish to go anywhere you must run twice as fast
as that.”
27. A Style Guide for Writing About AI
•Don’t talk about “AI” as if it is a noun
•It’s not
•Don’t take anything Elon Musk says about AI too seriously
•He’s great at cars and rockets, but for other things, it’s marketing
Editor's Notes
#6: The study and application of AI techniques we see today are based on these fundamentals. We typically see the study of AI broken into a focus on either behaving or thinking in simulated intelligent systems.
#8: In 2006, Geoff Hinton and his team at the University of Toronto published a key paper on Deep Belief Networks (DBNs). This provided the industry with a spark of creativity on what could possibly improve the state of the art. We’ve seen a tsunami of deep learning publications at top journals over the succeeding decade.
#14: Go was previously too hard for the same techniques that solved checkers and chess (A*-variants)
#23: This down period is referred to as an “AI winter” and involves cuts in academic research funding, reduced venture capital interest, and stigma in the marketing realm around anything connected to the term “artificial intelligence.”
#25: 2009: only a few experts at the top shops can do this (Google, Yahoo, FB)
We did it at TVA
Cloudera comes along, begins the commoditization process
Top end people in distributed systems see this
Ex: Joe Hellerstein co-founds Trifacta
2017: Hadoop becomes commoditized
Not nearly as exciting as it used to be
Table stakes in most shops