What's the Difference Between AI, Machine Learning, and Deep Learning?
Date: 2018-07-23 | Source: Oracle
AI means getting a computer to mimic human behavior in some way.
Machine learning is a subset of AI, and it consists of the techniques that enable computers to figure things out from the data and deliver AI applications.
Deep learning, meanwhile, is a subset of machine learning that enables computers to solve more complex problems.
What Is AI?
Artificial intelligence as an academic discipline was founded in 1956. The goal then, as now, was to get computers to perform tasks regarded as uniquely human: things that required intelligence. Initially, researchers worked on problems like playing checkers and solving logic problems.
If you looked at the output of one of those checkers-playing programs, you could see some form of "artificial intelligence" behind those moves, particularly when the computer beat you. Early successes caused the first researchers to exhibit almost boundless enthusiasm for the possibilities of AI, matched only by the extent to which they misjudged just how hard some problems were.
Artificial intelligence, then, refers to the output of a computer. The computer is doing something intelligent, so it’s exhibiting intelligence that is artificial.
The term AI doesn’t say anything about how those problems are solved. There are many different techniques including rule-based or expert systems. And one category of techniques started becoming more widely used in the 1980s: machine learning.
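To make the contrast concrete, here is a minimal sketch (not from the article, using a made-up fraud-screening scenario) of the rule-based style of AI mentioned above: the "intelligence" comes entirely from hand-written rules, not from data.

```python
# A toy rule-based ("expert system") check: every rule is hand-coded
# by a human expert; nothing is learned from data.
def looks_fraudulent(transaction):
    """Flag a transaction using fixed, hand-written rules."""
    # Rule 1: unusually large amount
    if transaction["amount"] > 10_000:
        return True
    # Rule 2: purchase made outside the cardholder's home country
    if transaction["country"] != transaction["home_country"]:
        return True
    return False

print(looks_fraudulent({"amount": 15_000, "country": "US", "home_country": "US"}))  # True
print(looks_fraudulent({"amount": 50, "country": "FR", "home_country": "US"}))      # True
print(looks_fraudulent({"amount": 50, "country": "US", "home_country": "US"}))      # False
```

The limitation is exactly the one the article describes next: for problems like image recognition, nobody can write down rules this way, which is where machine learning comes in.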
What Is Machine Learning?
The reason that those early researchers found some problems to be much harder is that those problems simply weren't amenable to the early techniques used for AI. Hard-coded algorithms or fixed, rule-based systems just didn’t work very well for things like image recognition or extracting meaning from text.
The solution turned out to be not just mimicking human behavior (AI) but mimicking how humans learn.
Think about how you learned to read. You didn’t sit down and learn spelling and grammar before picking up your first book. You read simple books, graduating to more complex ones over time. You actually learned the rules (and exceptions) of spelling and grammar from your reading. Put another way, you processed a lot of data and learned from it.
That's exactly the idea with machine learning. Feed an algorithm (as opposed to your brain) a lot of data and let it figure things out. Feed an algorithm a lot of data on financial transactions, tell it which ones are fraudulent, and let it work out what indicates fraud so it can predict fraud in the future. Or feed it information about your customer base and let it figure out how best to segment them.
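The fraud example above can be sketched in a few lines of pure Python (the data and the threshold learner are invented for illustration): instead of a human writing the rule, the program searches labeled historical transactions for the cutoff that best separates fraud from normal activity.

```python
# A toy "learning" step: find the amount cutoff that best separates
# fraud (label 1) from normal transactions (label 0) in labeled data.
def learn_threshold(amounts, labels):
    """Return the cutoff with the highest accuracy on the training data."""
    best_cut, best_acc = None, -1.0
    for cut in sorted(set(amounts)):
        preds = [1 if a >= cut else 0 for a in amounts]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

# Made-up labeled history: (transaction amount, is_fraud).
amounts = [20, 35, 50, 80, 5000, 7200, 9100, 12000]
labels  = [0,  0,  0,  0,  1,    1,    1,    1]

cut = learn_threshold(amounts, labels)
print(cut)  # 5000 -- the boundary the algorithm worked out on its own
```

Real machine-learning models learn far richer patterns than a single threshold, but the principle is the same: the rule comes out of the data, not out of a programmer's head.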
As these algorithms developed, they could tackle many problems. But some things that humans found easy (like speech or handwriting recognition) were still hard for machines. However, if machine learning is about mimicking how humans learn, why not go all the way and try to mimic the human brain? That’s the idea behind neural networks.
The idea of using artificial neurons (neurons, connected by synapses, are the major elements in your brain) had been around for a while. And neural networks simulated in software started being used for certain problems. They showed a lot of promise and could solve some complex problems that other algorithms couldn’t tackle.
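A single artificial neuron of the kind described above can be sketched directly: it computes a weighted sum of its inputs (the weights playing the role of synapse strengths) and fires if the sum crosses a threshold. The classic perceptron training rule, shown here on the logical OR function, nudges those weights whenever the neuron gets an example wrong. (This is a standard textbook illustration, not code from the article.)

```python
# One artificial neuron trained with the perceptron rule.
def train_perceptron(samples, epochs=10, lr=0.1):
    """Learn weights and a bias so the neuron reproduces the labeled samples."""
    w = [0.0, 0.0]   # one weight per input, like a synapse strength
    b = 0.0          # bias (firing threshold, with sign flipped)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1         # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical OR purely from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

A neural network is many such neurons wired together in layers; the power, and the difficulty, comes from training all their weights at once.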
But machine learning still got stuck on many things that elementary school children tackled with ease: How many dogs are in this picture (or are they really wolves)? Walk over there and bring me the ripe banana. What made this character in the book cry so much?
It turned out that the problem was not with the concept of machine learning, or even with the idea of mimicking the human brain. It was just that simple neural networks with hundreds or even thousands of neurons, connected in a relatively simple manner, couldn't duplicate what the human brain could do. That shouldn't be a surprise if you think about it: human brains have around 86 billion neurons and very complex interconnectivity.