The first known tools used to aid arithmetic calculations were bones, pebbles, counting boards, and the abacus, known to have been used by Sumerians and Egyptians before 2000 BC. New computing tools arrived near the start of the 17th century: the geometric-military compass (by Galileo), logarithms and Napier's bones (by Napier), and the slide rule (by Edmund Gunter).
The 17th century saw the invention of the mechanical calculator, by Wilhelm Schickard in 1623 and, two decades later, by Blaise Pascal in 1642. Schickard and Pascal were followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator and invented his Leibniz wheel in the process, but who never managed to produce a fully operational machine.
The 18th century brought further improvements, beginning with Poleni's machine, the first fully functional calculating clock and four-operation machine, but such machines were almost always one of a kind. It was not until the 19th century and the Industrial Revolution that real developments began to occur. The familiar push-button user interface did not arrive until 1902, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States.
The electronic calculators of the mid-1960s were large and heavy desktop machines, but by 1970 a calculator could be made using just a few low-power chips, allowing portable models powered by rechargeable batteries. The first portable calculators appeared in Japan in 1970 and were soon marketed around the world. They were very costly, at two or three weeks' wages, and so were luxury items. The high price was due to their construction, which required many costly mechanical and electronic components, and to production runs too small to exploit economies of scale. Through the 1970s the hand-held electronic calculator underwent rapid development.