History of computing
The earliest known tool for use in computation is the Sumerian abacus, thought to have been invented in Babylon c. 2700–2300 BC. It was originally used by drawing lines in sand and placing pebbles on them. Abaci of a more modern design are still used as calculation tools today. It was the first known computer and the most advanced system of calculation of its time, preceding Greek methods by 2,000 years.[citation needed]
In c. 1050–771 BC, the south-pointing chariot was invented in ancient China. It was the first known geared mechanism to use a differential gear, which was later used in analog computers. From around the 2nd century BC, the Chinese also developed a more sophisticated abacus, known as the Chinese abacus.[citation needed]
In the 5th century BC in ancient India, the grammarian Pāṇini formulated the grammar of Sanskrit in 3,959 rules known as the Ashtadhyayi, which was highly systematized and technical. Pāṇini used metarules, transformations and recursion.[7]
In the 3rd century BC, Archimedes used the mechanical principle of balance (see the Archimedes Palimpsest) to calculate mathematical problems, such as the number of grains of sand in the universe (The Sand Reckoner), which also required a recursive notation for numbers (e.g., the myriad myriad).
Around 200 BC the development of gears had made it possible to create devices in which the positions of wheels would correspond to positions of astronomical objects. By about 100 AD Hero of Alexandria had described an odometer-like device that could be driven automatically and could effectively count in digital form.[8] But it was not until the 1600s that mechanical devices for digital computation appear to have actually been built.
The Antikythera mechanism is believed to be the earliest known mechanical analog computer.[9] It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.
Mechanical analog computer devices appeared again a thousand years later in the medieval Islamic world and were developed by Muslim astronomers, such as the mechanical geared astrolabe by Abū Rayhān al-Bīrūnī,[10] and the torquetum by Jabir ibn Aflah.[11] According to Simon Singh, Muslim mathematicians also made important advances in cryptography, such as the development of cryptanalysis and frequency analysis by Alkindus.[12][13] Programmable machines were also invented by Muslim engineers, such as the automatic flute player by the Banū Mūsā brothers,[14] and Al-Jazari’s humanoid robots[citation needed] and castle clock, which is considered to be the first programmable analog computer.[15]
During the Middle Ages, several European philosophers made attempts to produce analog computer devices. Influenced by the Arabs and Scholasticism, the Majorcan philosopher Ramon Llull (1232–1315) devoted a great part of his life to defining and designing several logical machines that, by combining simple and undeniable philosophical truths, could produce all possible knowledge. These machines were never actually built, as they were more of a thought experiment for producing new knowledge in systematic ways; although they could perform simple logical operations, they still needed a human being to interpret the results. Moreover, they lacked a versatile architecture, each machine serving only a very concrete purpose. In spite of this, Llull’s work had a strong influence on Gottfried Leibniz (early 18th century), who developed his ideas further and built several calculating tools using them.
Indeed, when John Napier discovered logarithms for computational purposes in the early 17th century, there followed a period of considerable progress by inventors and scientists in making calculating tools. The apex of this early era of formal computing can be seen in the difference engine and its successor the analytical engine (which was never completely constructed but was designed in detail), both by Charles Babbage. The analytical engine combined concepts from his work and that of others to create a device that, if constructed as designed, would have possessed many properties of a modern electronic computer, including an internal “scratch memory” equivalent to RAM, multiple forms of output (a bell, a graph-plotter, and a simple printer), and a programmable input-output “hard” memory of punch cards which it could modify as well as read. The key advance of Babbage’s devices over those created before them was that each component of the device was independent of the rest of the machine, much like the components of a modern electronic computer. This was a fundamental shift in thought; previous computational devices served only a single purpose and had to be, at best, disassembled and reconfigured to solve a new problem. Babbage’s devices could be reprogrammed to solve new problems by the entry of new data, and could act upon previous calculations within the same series of instructions. Ada Lovelace took this concept one step further, by creating a program for the analytical engine to calculate Bernoulli numbers, a complex calculation requiring a recursive algorithm. This is considered to be the first example of a true computer program, a series of instructions that act upon data not known in full until the program is run.
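The recurrence behind Bernoulli numbers, which Lovelace's program tabulated, is short enough to sketch. The Python snippet below is a modern illustration only, not a reconstruction of Lovelace's Note G program; the function name bernoulli is ours. It uses the classical identity that, for m > 0, the binomial-weighted sum of B_0 through B_m is zero (with this convention, B_1 = -1/2).

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions, using the recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m > 0, with B_0 = 1."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        # Solve the recurrence for B_m from the previously computed values.
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B[m] = -acc / (m + 1)
    return B

if __name__ == "__main__":
    for i, b in enumerate(bernoulli(8)):
        print(f"B_{i} = {b}")   # B_0 = 1, B_1 = -1/2, B_2 = 1/6, ...
```

The point of the illustration is that each Bernoulli number is defined in terms of all the earlier ones, which is exactly the kind of calculation, acting on intermediate results within one series of instructions, that the analytical engine was designed to carry out.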
Several examples of analog computation survived into recent times. A planimeter is a device that computes integrals, using distance as the analog quantity. Until the 1980s, HVAC systems used air both as the analog quantity and the controlling element. Unlike modern digital computers, analog computers are not very flexible and need to be reconfigured (i.e., reprogrammed) manually to switch them from working on one problem to another. Analog computers had an advantage over early digital computers in that they could be used to solve complex problems using behavioral analogues, while the earliest attempts at digital computers were quite limited.
None of the early computational devices were really computers in the modern sense, and it took considerable advancement in mathematics and theory before the first modern computers could be designed.
The first recorded idea of using digital electronics for computing was the 1931 paper “The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena” by C. E. Wynn-Williams.[18] From 1934 to 1936, NEC engineer Akira Nakashima published a series of papers introducing switching circuit theory, using digital electronics for Boolean algebraic operations,[19][20][21] influencing Claude Shannon’s seminal 1938 paper “A Symbolic Analysis of Relay and Switching Circuits”.[22]
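The core idea of switching circuit theory can be shown in a few lines. The sketch below is not taken from Nakashima's or Shannon's papers; the helper names series, parallel and normally_closed are ours. It illustrates the correspondence they formalized: switch contacts in series behave like Boolean AND, contacts in parallel behave like OR, and a normally-closed contact behaves like NOT.

```python
def series(*switches):          # all contacts must be closed -> AND
    return all(switches)

def parallel(*switches):        # any closed contact completes the path -> OR
    return any(switches)

def normally_closed(switch):    # contact opens when the relay is energized -> NOT
    return not switch

# Example: a lamp circuit that conducts when (a AND b) OR (NOT c).
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            lamp = parallel(series(a, b), normally_closed(c))
            print(a, b, c, "->", lamp)
```

Because every Boolean expression can be built from these three constructions, any truth table can in principle be realized as a network of relays, which is what made digital electronic computation practical.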
The Atanasoff–Berry computer, designed in 1937, was the first digital electronic computer, though it was not programmable; the Z3 computer, built in 1941 by German inventor Konrad Zuse, was the first working programmable, fully automatic computing machine.
Alan Turing modelled computation in terms of a one-dimensional storage tape, leading to the idea of the Turing machine and Turing-complete programming systems.
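A Turing machine is simple enough to simulate directly. The following sketch is not Turing's original formulation, and the rule table shown (a machine that increments a binary number) is our own illustrative example: a finite set of rules reads and writes symbols on an unbounded tape, one cell at a time.

```python
def run(tape, rules, state="start", blank="_"):
    """Simulate a one-tape Turing machine until it reaches the 'halt' state."""
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

# Rules for binary increment: scan to the rightmost digit, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run("1011", rules))   # -> 1100  (11 + 1 = 12 in binary)
```

Despite the minimal machinery, this model captures everything a digital computer can compute, which is why "Turing-complete" became the benchmark for the general-purpose machines described below.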
During World War II, ballistics computing was done by women, who were hired as “computers.” The term “computer” mostly referred to these women (whose role would now be described as “operator”) until about 1945, after which it took on the modern meaning of a machine that it presently holds.[23]
The ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general-purpose computer, announced to the public in 1946. It was Turing-complete,[citation needed] digital, and capable of being reprogrammed to solve a full range of computing problems. Women implemented the programming for machines like the ENIAC, and men created the hardware.[23]
The Manchester Baby was the first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.[24] The first stored-program transistor computer was the ETL Mark III, developed by Japan’s Electrotechnical Laboratory[25][26][27] from 1954[28] to 1956.[26]
The microprocessor was introduced with the Intel 4004. It began with the “Busicom Project”[29] as Masatoshi Shima’s three-chip CPU design in 1968,[30][29] before Sharp’s Tadashi Sasaki conceived of a single-chip CPU design, which he discussed with Busicom and Intel in 1968.[31] The Intel 4004 was then developed as a single-chip microprocessor from 1969 to 1970, led by Intel’s Marcian Hoff and Federico Faggin and Busicom’s Masatoshi Shima.[29] The microprocessor led to the development of microcomputers, and the microcomputer revolution.
The 1980s brought significant advances with microprocessors that greatly impacted the fields of engineering and other sciences. The Motorola 68000 microprocessor had a processing speed far superior to the other microprocessors in use at the time, and this faster microprocessor allowed the newer microcomputers that followed to do considerably more computing. This was evident in the 1983 release of the Apple Lisa. The Lisa was the first personal computer with a graphical user interface (GUI) to be sold commercially; it ran on the Motorola 68000 and had dual floppy disk drives, a 5 MB hard drive, and 1 MB of RAM.[32] A year after launching the Lisa, Apple released its first Macintosh computer, still running on the Motorola 68000 microprocessor. Another advance driven by microprocessors came from Texas Instruments, which introduced its TMS9900 processor in June 1976[33] and then used it in its TI 99/4 computer.
The late 1980s and early 1990s saw further advances in computers that aided actual computation. In 1990, Apple released the Macintosh Portable; it was heavy, weighing 7.3 kg (16 lb), and extremely expensive. It was not met with great success and was discontinued only two years later. That same year, Intel introduced the Touchstone Delta supercomputer, which had 512 microprocessors. This technological advance was significant, as it served as a model for some of the fastest multiprocessor systems in the world. It was even used as a prototype by Caltech researchers, who used it for projects such as real-time processing of satellite images and simulation of molecular models for various fields of research.