Electronic Computing: Crash Course Computer Science #2
Our last episode brought us to the start of the 20th century, where early, special purpose computing devices, like tabulating machines, were a huge boon to governments and business - aiding, and sometimes replacing, rote manual tasks. But the scale of human systems continued to increase at an unprecedented rate. The first half of the 20th century saw the world’s population almost double. World War 1 mobilized 70 million people, and World War 2 involved more than 100 million. Global trade and transit networks became interconnected like never before, and the sophistication of our engineering and scientific endeavors reached new heights – we even started to seriously consider visiting other planets.
And it was this explosion of complexity, bureaucracy, and ultimately data, that drove an increasing need for automation and computation. Soon those cabinet-sized electro-mechanical computers grew into room-sized behemoths that were expensive to maintain and prone to errors.
And it was these machines that would set the stage for future innovation.
One of the largest electro-mechanical computers built was the Harvard Mark I, completed in 1944 by IBM for the Allies during World War 2.
It contained 765,000 components, three million connections, and five hundred miles of wire.
To keep its internal mechanics synchronized, it used a 50-foot shaft running right through the machine driven by a five horsepower motor.
One of the earliest uses for this technology was running simulations for the Manhattan Project.
The brains of these huge electro-mechanical beasts were relays: electrically-controlled mechanical switches. In a relay, there is a control wire that determines whether a circuit is opened or closed. The control wire connects to a coil of wire inside the relay. When current flows through the coil, an electromagnetic field is created, which in turn, attracts a metal arm inside the relay, snapping it shut and completing the circuit. You can think of a relay like a water faucet. The control wire is like the faucet handle. Open the faucet, and water flows through the pipe. Close the faucet, and the flow of water stops.
Relays are doing the same thing, just with electrons instead of water. The controlled circuit can then connect to other circuits, or to something like a motor, which might increment a count on a gear, like in Hollerith's tabulating machine we talked about last episode. Unfortunately, the mechanical arm inside of a relay has mass, and therefore can’t move instantly between opened and closed states.
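The relay mechanism described above can be sketched as a tiny simulation. This is a hypothetical `Relay` class of my own, not anything from the episode, but it captures the cause-and-effect: energizing the coil closes the circuit.

```python
# A minimal sketch modeling a relay as an electrically-controlled switch:
# current through the control wire's coil pulls the metal arm shut.
class Relay:
    def __init__(self):
        self.coil_energized = False  # is current flowing through the coil?

    def set_control(self, current_on: bool):
        # The electromagnetic field attracts the arm when the coil is energized.
        self.coil_energized = current_on

    def circuit_closed(self) -> bool:
        # The controlled circuit conducts only while the arm is held shut.
        return self.coil_energized

relay = Relay()
relay.set_control(True)
print(relay.circuit_closed())  # True: current flows, like an open faucet
```

Just as with the faucet analogy, the control wire only decides whether the main circuit conducts; it isn't part of the controlled circuit itself.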
A good relay in the 1940’s might be able to flick back and forth fifty times in a second.
That might seem pretty fast, but it’s not fast enough to be useful at solving large, complex problems. The Harvard Mark I could do 3 additions or subtractions per second; multiplications took 6 seconds, and divisions took 15.
And more complex operations, like a trigonometric function, could take over a minute.
In addition to slow switching speed, another limitation was wear and tear. Anything mechanical that moves will wear over time. Some things break entirely, and other things start getting sticky, slow, and just plain unreliable.
And as the number of relays increases, the probability of a failure increases too. The Harvard Mark I had roughly 3500 relays. Even if you assume a relay has an operational life of 10 years, this would mean you’d have to replace, on average, one faulty relay every day! That’s a big problem when you are in the middle of running some important, multi-day calculation.
And that’s not all engineers had to contend with. These huge, dark, and warm machines also attracted insects. In September 1947, operators on the Harvard Mark II pulled a dead moth from a malfunctioning relay. Grace Hopper, who we’ll talk more about in a later episode, noted, “From then on, when anything went wrong with a computer, we said it had bugs in it.”
And that’s where we get the term computer bug.
It was clear that a faster, more reliable alternative to electro-mechanical relays was needed if computing was going to advance further, and fortunately that alternative already existed!
In 1904, English physicist John Ambrose Fleming developed a new electrical component called a thermionic valve, which housed two electrodes inside an airtight glass bulb - this was the first vacuum tube. One of the electrodes could be heated, which would cause it to emit electrons – a process called thermionic emission. The other electrode could then attract these electrons to create the flow of our electric faucet, but only if it was positively charged - if it had a negative or neutral charge, the electrons would no longer be attracted across the vacuum so no current would flow.
An electronic component that permits the one-way flow of current is called a diode, but what was really needed was a switch to help turn this flow on and off. Luckily, shortly after, in 1906, American inventor Lee de Forest added a third “control” electrode that sits between the two electrodes in Fleming’s design.
By applying a positive charge to the control electrode, it would permit the flow of electrons as before. But if the control electrode was given a negative charge, it would prevent the flow of electrons. So by manipulating the control wire, one could open or close the circuit. It’s pretty much the same thing as a relay - but importantly, vacuum tubes have no moving parts. This meant there was less wear, and more importantly, they could switch thousands of times per second. These triode vacuum tubes would become the basis of radio, long distance telephone, and many other electronic devices for nearly a half century. I should note here that vacuum tubes weren’t perfect - they’re kind of fragile, and can burn out like light bulbs - but they were a big improvement over mechanical relays.
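The triode's switching behavior can be sketched in a few lines. This is my simplification, not from the episode, treating each electrode's charge as simply positive, neutral, or negative, per the description above:

```python
# A sketch of triode behavior: electrons cross the vacuum only if the
# receiving electrode attracts them (positive charge) and the control
# electrode does not block them (also positive, per the description above).
def triode_conducts(plate_charge: int, control_charge: int) -> bool:
    """Charges encoded as: +1 positive, 0 neutral, -1 negative."""
    return plate_charge > 0 and control_charge > 0

print(triode_conducts(plate_charge=1, control_charge=1))   # True: circuit closed
print(triode_conducts(plate_charge=1, control_charge=-1))  # False: flow blocked
```

Note the interface is the same as a relay's - one input opens or closes the circuit - which is why tubes could replace relays without rethinking the machine's logic.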
Also, initially vacuum tubes were expensive – a radio set often used just one, but a computer might require hundreds or thousands of electrical switches.
But by the 1940s, their cost and reliability had improved to the point where they became feasible for use in computers…. at least by people with deep pockets, like governments. This marked the shift from electro-mechanical computing to electronic computing. Let’s go to the Thought Bubble.
The first large-scale use of vacuum tubes for computing was the Colossus Mk 1 designed by engineer Tommy Flowers and completed in December of 1943. The Colossus was installed at Bletchley Park, in the UK, and helped to decrypt Nazi communications.
This may sound familiar because two years prior Alan Turing, often called the father of computer science, had created an electromechanical device, also at Bletchley Park, called the Bombe. It was an electromechanical machine designed to break Nazi Enigma codes, but the Bombe wasn’t technically a computer, and we’ll get to Alan Turing’s contributions later. Anyway, the first version of Colossus contained 1,600 vacuum tubes, and in total, ten Colossi were built to help with code-breaking.
Colossus is regarded as the first programmable, electronic computer.
Programming was done by plugging hundreds of wires into plugboards, sort of like old school telephone switchboards, in order to set up the computer to perform the right operations.
So while “programmable”, it still had to be configured to perform a specific computation.
Enter the Electronic Numerical Integrator and Computer – or ENIAC – completed a few years later in 1946 at the University of Pennsylvania.
Designed by John Mauchly and J. Presper Eckert, this was the world's first truly general purpose, programmable, electronic computer.
ENIAC could perform 5000 ten-digit additions or subtractions per second, many, many times faster than any machine that came before it. It was operational for ten years, and is estimated to have done more arithmetic than the entire human race up to that point.
But with that many vacuum tubes - nearly 18,000 of them - failures were common, and ENIAC was generally only operational for about half a day at a time before breaking down.
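Comparing the rates quoted above gives a sense of the leap from electro-mechanical to electronic computing. A quick check, using only the episode's own numbers:

```python
# How much faster ENIAC's additions were than the Harvard Mark I's,
# using the rates quoted above.
mark1_adds_per_sec = 3     # Harvard Mark I: 3 additions per second
eniac_adds_per_sec = 5000  # ENIAC: 5000 ten-digit additions per second

speedup = eniac_adds_per_sec / mark1_adds_per_sec
print(round(speedup))  # ~1667x faster at addition
```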
Thanks Thought Bubble. By the 1950’s, even vacuum-tube-based computing was reaching its limits.
The US Air Force’s AN/FSQ-7 computer, which was completed in 1955, was part of the “SAGE” air defense computer system, which we’ll talk more about in a later episode.
To reduce cost and size, as well as improve reliability and speed, a radical new electronic switch would be needed. In 1947, Bell Laboratory scientists John Bardeen, Walter Brattain, and William Shockley invented the transistor, and with it, a whole new era of computing was born!
The physics behind transistors is pretty complex, relying on quantum mechanics, so we’re going to stick to the basics.
A transistor is just like a relay or vacuum tube - it’s a switch that can be opened or closed by applying electrical power via a control wire. Typically, transistors have two electrodes separated by a material that sometimes can conduct electricity, and other times resist it – a semiconductor. In this case, the control wire attaches to a “gate” electrode. By changing the electrical charge of the gate, the conductivity of the semiconducting material can be manipulated, allowing current to flow or be stopped – like the water faucet analogy we discussed earlier. Even the very first transistor at Bell Labs showed tremendous promise – it could switch between on and off states 10,000 times per second.
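Functionally, that makes the transistor sketchable the same way as the relay and the tube. A hypothetical one-line model of my own, glossing over the real device physics:

```python
# A sketch of a gate-controlled transistor switch: charge on the gate
# determines whether the semiconductor channel between the two
# electrodes conducts.
def transistor_conducts(gate_charged: bool) -> bool:
    # A charged gate makes the semiconductor conductive, letting current
    # flow between the electrodes; remove the charge and the flow stops.
    return gate_charged

print(transistor_conducts(True))   # True: current flows
print(transistor_conducts(False))  # False: channel blocks current
```

The point of the sketch is that all three generations of switch - relay, tube, transistor - present the same open/close abstraction; only the physical mechanism underneath changed.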
Further, unlike vacuum tubes, made of glass and with carefully suspended, fragile components, transistors are made of solid material - they’re what’s known as solid state components.
Almost immediately, transistors could be made smaller than the smallest possible relays or vacuum tubes.
This led to dramatically smaller and cheaper computers, like the IBM 608, released in 1957 – the first fully transistor-powered, commercially-available computer.
It contained 3000 transistors and could perform 4,500 additions, or roughly 80 multiplications or divisions, every second. IBM soon transitioned all of its computing products to transistors, bringing transistor-based computers into offices, and eventually, homes.
Today, computers use transistors that are smaller than 50 nanometers in size – for reference, a sheet of paper is roughly 100,000 nanometers thick. And they’re not only incredibly small, they’re super fast – they can switch states millions of times per second, and can run for decades.
A lot of this transistor and semiconductor development happened in the Santa Clara Valley, between San Francisco and San Jose, California.
As the most common material used to create semiconductors is silicon, this region soon became known as Silicon Valley. Even William Shockley moved there, founding Shockley Semiconductor, whose employees later founded Fairchild Semiconductors, whose employees later founded Intel - the world’s largest computer chip maker today.
Ok, so we’ve gone from relays to vacuum tubes to transistors. We can turn electricity on and off really, really, really fast. But how do we get from transistors to actually computing something, especially if we don’t have motors and gears?
That’s what we’re going to cover over the next few episodes.
Thanks for watching. See you next week.
中文
上集我们谈到了20世纪初,如制表机这样针对特定用途的设备是政府和企业的福星,它们帮助,甚至代替了人工操作,但人口仍然在以不可预料的速度增长
20世纪上半叶,世界人口几乎翻了一番第一次世界大战动员了7000万人,第二次世界大战有超过1亿人参与,全球贸易和运输网络开始了前所未有的连接
我们的工程和科学事业也变得,前所未有的复杂,我们甚至开始认真地考虑能不能访问其他星球,正是这种复杂性,官僚主义和最终数据的爆炸性增加,导致了人们对自动化和计算能力的需求日益增长,然后这些柜子一般大小的电子机器,变成了需要花费巨资维护的房间一般大的庞然大物,而且容易出错,而正是这些机器将为未来计算机的革新打下基础,最大的电子计算机之一,叫做哈佛马克1号(HarvardMarkI),于1944年在第二次世界大战中由IBM作为同盟国而建造,它有765,000个组件,3百万个连接点和500英里长的导线,为了保持内部机械装置同步,它用了个50英尺长的传动轴,传动轴由一个5马力功率的电机驱动,这项技术其中一个最早的用途,是给曼哈顿计划运行计算机模拟
这些巨大的机电怪兽的核心是,继电器,用电控的机械开关,在继电器中,有根决定电路是否闭合的控制线,控制线连着,继电器里的线圈当电流流过线圈,电磁场就产生了,它吸引继电器内的金属臂,导致电路闭合,你可以把继电器看成是水龙头,控制线,就是水龙头的把手,打开水龙头,水就会流过管道,关闭水龙头,水就,继电器和水龙头是一样的,只是在继电器中电代替了水,然后受控制的电路就可以连到其他电路,或者是连电动机之类的东西它可以使计数齿轮+1,就像上集Hollerith的制表机一样不幸的是,继电器内部的机械臂,“有质量”因此不能快速打开关闭
20世纪40年代,好的继电器可以一秒内开关五十次,看起来很快,但还不至于快到解决一些很大,很复杂的问题
哈佛马克1号(HarvardMarkI)每秒可以做3次加减法,但一次乘法就要花费6秒时间,除法则是15秒,而更复杂的,像三角函数之类的操作,可能要超过一分钟,除了开关速度慢,另一个限制是齿轮会磨损,任何会动的机械都会随着时间而磨损一些齿轮若是坏了,其它的就会变慢,甚至影响工作,并且随着继电器越来越多,故障的概率也越来越大哈佛马克1号(HarvardMarkI)有大约3500个继电器,就算假设一个继电器的使用寿命为10年这也意味着,平均每天也得换掉1个故障的继电器!,这对于一些要运行很多天的重要计算是个很严重的问题,而且这还不算完这些又大又黑,散热还特别厉害的机器,还会吸引昆虫1947年9月,哈佛马克2号(HarvardMarkII)的操作员,从故障继电器中取出了一只死掉的飞蛾GraceHopper(这个人我们之后会讲到)说“从那时起,当电脑出问题了,我们就会说里面有只虫子(bug)"这就是计算机术语"bug"的来源
显然,如果想进一步推进计算能力,我们需要更快更可靠的东西来替代继电器幸运的是,替代品已经存在了!
1904年英国物理学家,约翰·安布罗斯·弗莱明开发了一种全新的电子部件,叫“热电子管”就是在一个密封玻璃灯泡里放了2个电极,这是世上第一个真空管其中一个电极可以被加热,然后发射电子,这个过程称为“热电子发射”另一个电极可以吸引这些电子,来在这个“电子水龙头“中形成电流但只有带正电时才行,如果带负电荷或中性电荷,电子不会被吸引着穿过真空,所以没有电流会流过,任何只允许电流单向流动的电子部件,都叫做二极管但我们还是需要一个开关,来开关电路
很棒,不久之后的1906年,美国发明家李·德富雷斯特参考弗莱明的设计,在两个电极间,加入了第三个“控制”电极,通过向控制电极施加正电荷,使得电子可以流动但如果向控制电极施加负电荷,它将阻止电子流动因此通过操作控制线路,我们就可以打开或闭合电路,这和继电器基本就是一回事但请注意,真空管内没有活动部件,这意味着更少的磨损更重要的是,它们每秒可以开关数千次这些三极管真空管将成为无线电、长途电话,以及近半个世纪的其他电子设备的基础,我要说明的是,真空管也不是十分完美的,它们有点脆弱,并且会像灯泡一样烧坏但它们是对机械继电器的一次重大改进,此外,最初真空管非常昂贵,一个收音机通常只用一个但是计算机可能需要数百或数千个电子开关
但到了20世纪40年代它们的成本和可靠性,已经提高到可以在计算机中使用的程度...至少可以被那些有钱人使用,比如政府这标志着人们开始从电子机械计算机器
转变为电子计算然后是思想泡泡~
第一次大规模使用真空管是设计ColossusMk1时,由工程师TommyFlowers设计,完工于1943年12月,Colossus被安装在英国的布莱切利园用来帮助解密纳粹通信,这听起来很熟悉,因为在两年前经常被称为计算机科学之父的,阿兰·图灵也在布莱切利园创造了台机电装置,叫Bombe这台机器的设计目的是破解纳粹的英格码(Enigma),但是Bombe严格来说不算是台计算机我们之后再讨论阿兰·图灵的贡献,总之呢,第一版的Colossus,有1,600个真空管总共造了十个Colossus来帮助破解密码,Colossus被认为是第一个可编程的电子计算机,编程方法是把几百根电线插到插板里,有点像老电话交换机这样计算机才会执行正确的操作,虽然它“可编程”,但还是得人工设置才能执行特定,电子数值积分计算机"ENIAC"
在1946年在宾夕法尼亚大学完成建造,它由JohnMauchly和J.PresperEckert设计它是世上第一个真正的“通用目的”的,“可编程”的“电子”计算机,ENIAC每秒可执行5000次十位数加减法,比它的前辈快很多很多倍它工作了十年,据估计,它的运算量超过了全人类有史以来的所有运算,但由于真空管故障非常常见ENIAC每次故障前,一般只能运行半天左右
多谢,思想泡泡到了1950年代,甚至基于真空管的计算已经达到极限,美国空军的AN/FSQ-7计算机,1955年制造,是“SAGE”防空计算机系统的一部分这个我们以后会说,为了降低成本和尺寸,提高可靠性和速度我们需要一种,全新的电子开关1947年,贝尔实验室科学家JohnBardeen,WalterBrattain,和WilliamShockley发明了晶体管就是它,一个全新的计算时代诞生了!
晶体管背后的物理学相当复杂,依赖于量子力学,所以我们只讲基础
晶体管就像继电器或者真空管它是一个开关,可以通过向控制线施加电源打开或关闭通常来说,晶体管有导电材料隔开的两个电极,这些材料有时导电有时却不导电,这就是半导体,控制线连到一个“门”电极通过改变“门”的电荷,我们可以控制半导体的导电性允许电流流动或停止,就像前面提到的水龙头比喻贝尔实验室早期的第一个晶体管,就展示了巨大的潜力它每秒可以打开关闭10,000次,而且,和玻璃制成而且易碎的真空管不同,晶体管是固态组件,晶体管几乎可以制造得比世上最小的继电器和真空管还要小,这便促生更小更便宜的计算机例如1957年发行的IBM608,这是第一个完全晶体管供电,可以从市面上买到的计算机,它有3000个晶体管,每秒可执行4500次加法,或大约每秒80次乘除法IBM很快改变了所有的计算机产品,都用晶体管来做将基于晶体管的计算机带入办公室,最终带入家庭使用
如今,计算机里晶体管的尺寸小于50纳米,对比起来,一张纸的厚度约为100,000纳米晶体管不仅小,还超级快,它们可以每秒切换状态数百万次,并且可以运行几十年,晶体管和半导体的研发,大多都发生在圣克拉拉谷(SantaClaraValley),它在旧金山和加利福尼亚的圣荷西之间,因为用于产生半导体的最常见的材料是“硅”,于是这个地区很快就被称为“硅谷”甚至WilliamShockley都搬到了那里,创立了肖克利半导体其员工后来成立了,飞兆半导体其员工后来创立了,英特尔当今世上最大的计算机芯片制造商,我们从继电器到真空管,再到晶体管,我们可以让电路开关得非常,非常,非常快但我们又是怎么用晶体管来进行实际计算呢?我们可没有电机和齿轮可用,这是我们接下来几集的内容
感谢观看,下周见。