What is Computing?

By Mr. Archis Gore, student, Masters in Scientific Computing at the Inter-disciplinary School of Scientific Computing, University of Pune.

Computers are everywhere these days: inside our cellular phones, cars, wrist-watches and even microwave ovens. Almost everyone has a PC at home or will have one within the next two years. We learn to use computers the way we learnt to walk. They are no longer a luxury, but a fact of life. Just a couple of years ago, I had to ask friends whether they had a computer at home before beginning a conversation with them. Now, however, I don't know anyone (including my mailman, vegetable vendor and housemaid) who doesn't have a PC at home. We use words like “mobile computing” as if we were born with their knowledge. We use computers in practically everything we do (so much so that I cannot even solve math problems on paper anymore; instead, I write programs and use them to automate my work). The era of repetition is gone; the age of automation has finally arrived.

When we think of a computer, we generally think of a screen, a keyboard and a CPU. We think of it as a machine that can edit documents, save files, store images and music, play games, and automate business (banking, finance, etc.). It is not uncommon to come across advanced computer users, well experienced in many programming languages, the most popular one being C. And we hear of great new developments in computer technology in phrases like “three-tier business applications”, “SOAP-based XML web-services” and “GUI designing”. These phrases are quite impressive; they represent technological evolution over the years.

It is easy to be carried away by the words Information Technology and Computer Science and to get very wrong ideas about what they are. This is quite natural, since the field is still so young.

This is largely because the other sciences have developed over more than three centuries (some, like the mathematical sciences, have millennia of history behind them). That long history makes us much more aware of what they represent, contain and study. For instance, nobody would easily believe a physicist who claimed to have accidentally invented a method of time travel in their backyard, because you generally have an idea of what physics is all about. However, if a computer scientist were to say to you, “I can solve your travelling salesman problem in polynomial time”, you would not question the person too much. In fact, I doubt whether you would question them at all.

The other problem is that in the computer field, people don't know the difference between being an expert computer user and being a computer scientist. They generally mistake one for the other. You may have seen many people calling expert programmers computer scientists without knowing the distinction. And the people are not to blame: even the educational institutions that are ideally responsible for upholding the science have fallen prone to the publicity and fame.

As an example, go out and locate a “networking expert” of your choice. He will generally be a person who knows all the options in all the files in the /etc directory. He will know how to start and restart system services and will know a lot of console commands. He will know how to plug in wires and network cards and set the common options in the configuration files. He will be an expert at writing shell scripts and will remember their most minute details. Now simply ask him how one might design a protocol for a company's primary router that connects to unknown routers advertising routes that cannot be trusted; how the router might use knowledge-maintenance systems, expert systems and artificial intelligence to turn the perceived information (the information provided by neighbouring routers) into a consistent internal routing table. I assure you, in most cases this expert will reply that all these things are not important, and that what's important is being able to write configuration files and shell scripts.

This is the difference between an expert computer user and a computing expert. The computing expert would not care for any commands or physical options for a specific OS. Rather, he would focus on what those options mean, why they are important and how they are relevant to the problem at hand. He would be least interested in knowing how you write those options, where you store them or what cool hacker-terminology you use to implement them.

Have you ever wondered how simply programming a computer can make it a science? Have you ever wondered why the term “computer science” even exists? Is it only because experts in using word processors or database packages exist? Don't you think that there must be some science behind it? I shall now come to the point of this article: to explain what computing and computer science are, and to identify what the science behind computers is, if there is any. I shall also take a tangent into the fields of information science, information theory and information technology from an academic point of view, to clarify what they mean. The words IT and computer science are used interchangeably so often that it really irritates us core-computing guys. This does not mean that we're not information theorists; we just like to be interpreted in context wherever applicable. For instance, if I develop an algorithm to play chess better than Deep Blue, I'd like to be portrayed as a computer scientist. If I write a program to transfer this information in real time to many more such chess-playing programs all over the world so that they can all learn from each other, I'd like to be portrayed as an information theorist.

Let us now go back to the “computers taking over the world” theme borrowed from popular movie franchises. Is it all really possible? Can it be done? Or is it just pure techno-babble?

Well, the answer is reassuringly positive: yes, those things are possible, and not only theoretically but also experimentally. We're just taking the first steps towards building “intelligent” machines, but we are progressing at a fair pace. As an example, my own third-year undergraduate project tried to analyse brainwaves directly and control the position of the mouse cursor on screen by simply “thinking about it”. Experiments with think-typing and think-control are showing good results, and some fascinating findings have come out of them. For instance, did you know that if you were to record the brainwaves of, say, 100 smart people and train your own brainwaves to conform to their “signature”, you could actually increase your own intelligence? This is called biofeedback, and it is being actively experimented on. In fact, biofeedback products may become commodities very soon. Imagine being able to “borrow” traits like mathematical ability, intelligence or linguistic ability by borrowing the brainwave signature of another person.

Computer science solves problems of computation (duh!). Don't expect a computer scientist to tell you how the universe works; that is a physics problem. However, if you tell a computer scientist certain rules about physics and ask him to simulate the entire universe, he would be the right person for the job. In essence, CS does not care about what is required. You have to tell CS what you expect, and you have to define this “what you expect” very precisely. CS will then tell you how to get it most efficiently. For instance, if you were to ask CS what the area of a rectangle is, given that the length of each side is more than a million digits long, CS would be so stupid that it would not know what you mean by the “area of a rectangle”. However, if you tell CS that the area of a rectangle is the product of the lengths of its two sides, then CS would be smart enough to give you a method for finding the product of two numbers a million digits long each. And this method would be orders of magnitude faster than the paper-and-pencil method we're used to. To elaborate: the conventional method would take around 8 hours on your conventional PC, whereas the FFT-based method would take less than a couple of seconds.
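To make the multiplication example concrete, here is a minimal sketch in Python (the language and the function name are my choosing, not from the article) of the paper-and-pencil method. As an aside, Python's own big-integer multiply uses the Karatsuba algorithm rather than FFT, but the moral is the same: it is a better algorithm, not a better machine, that makes million-digit products feasible.

```python
def schoolbook_multiply(a, b):
    """Paper-and-pencil multiplication: every digit of a times every
    digit of b, so roughly n*m elementary steps for n- and m-digit inputs."""
    da = [int(d) for d in str(a)][::-1]   # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    result = [0] * (len(da) + len(db))
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            result[i + j] += x * y
    for k in range(len(result) - 1):      # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return int(''.join(map(str, reversed(result))))

# Python's built-in `*` gives the same answer, but scales far better,
# because it is backed by a cleverer algorithm.
print(schoolbook_multiply(123456, 789012) == 123456 * 789012)  # prints True
```

Both methods compute the same product; the difference CS cares about is how the number of steps grows as the inputs get longer.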

Our brain, as I explained above, is a substantially different machine compared to the PC. Therefore, the algorithms that are fast on a PC may be very dumb for a human. And CS knows this very well. In fact, this is why CS is a permanent science that can never go away. As we begin coming up with different types of computers, we will need CS to tell us exactly how to use them effectively for our specific needs.

But how does CS know which algorithms will run fast on which machine? More importantly, can't we just run the algorithm on the machine and test its speed? Why the hell do we need a special computing guy in the loop? The answer lies in the fact that we don't build machines first and test algorithms later. If we did that, we would be no different from the classical alchemists of the 16th century, blindly trying method after method of turning lead into gold with no reasoning behind the methods themselves. CS, on the other hand, actively helps in the design of these machines. For a given method and algorithm, CS can tell you what machine you would have to build to support that method most effectively.

When I mentioned various devices that contain computers at the beginning, I failed to mention that these computers are vastly different in design and operation depending on their purpose. Our PCs are known as general-purpose computers, i.e., computing machines that support a few general operations upon which other operations can be built. This allows our PCs to display impressive graphics, play sound and compute simulations of nuclear devices. However, this one-for-all solution is also a slow-for-all solution: your general PC is substantially slower than what could ideally be built specifically for composing sound, rendering graphics or simulating atomic structures.

Each computing machine is abstracted into a mathematical model for the purposes of analysis. Our general PCs are abstracted into Turing Machines (named after Alan Turing, the mathematician who provided the first model for a digital computer). It is by the use of this Turing machine and its mathematical properties that we are able to speak of the effectiveness and correctness of our algorithms without actually executing them. Our brain is another such machine, and it is being approximated by “Neural Networks”. ANNs (Artificial Neural Networks) simulate rough approximations of biological neurons on your standard computer to reproduce a machine similar to the brain. Neural networks have remarkable properties: identifying patterns, rejecting unwanted data, cognition, learning and memorization. They provide us with very small brains on our own PCs. But don't begin hoping to see computers that can think on your desktop too soon. Your average brain has on the order of 100 billion neurons interconnected in a complex structure that gives your “neural network” immense power. To simulate a neural network of such complexity would require a conventional computer around 10^5 times as powerful as the world's fastest supercomputer. But with new electronics designs, neurons are being developed “on-chip”, and the dream of a small thinking machine may yet come true (from a scientific point of view, ‘soon' should be interpreted as 50-100 years).

Ironically enough, as if to remind us constantly that unity is the best policy, nature has chosen to put the power of a neural network in the “connections” between the neurons and not in the neurons themselves. Even in artificial neural networks, the real problems are those of determining the connections and their architecture. Conventional Turing Machines, and their parallel versions (most supercomputers), have the power in the processing chips themselves; computing does not take place by the chips signalling each other. Only data moves between the nodes, while processing takes place on the processors. In a neural network, however, as in your brain, the processing takes place in the connections: when data moves from one neuron to another, it is either amplified or damped, and this is the network's peculiar method of “processing”. Yet this peculiar and simplistic method of processing is immensely powerful in practice.
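A single artificial neuron makes the point concrete. In the Python sketch below (my own minimal illustration, not taken from the article), the “processing” really does live in the connection weights that amplify or damp each incoming signal; the neuron body itself only sums and thresholds.

```python
def neuron(inputs, weights, bias):
    """Each connection weight amplifies or damps its input signal;
    the neuron merely sums the results and fires past a threshold."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# With these hand-picked weights the very same neuron computes logical AND;
# change only the weights and bias (the connections) and it computes OR.
AND = lambda a, b: neuron([a, b], [1.0, 1.0], bias=-1.5)
OR  = lambda a, b: neuron([a, b], [1.0, 1.0], bias=-0.5)
```

The neuron's code never changes; only the connections do. That is precisely where nature has hidden the power.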

Before we leave the topic of machines, I would like to comment on some exciting conjectures about these Turing machines. A Turing machine is a machine that can compute anything that is computable at all (this claim is known as the Church-Turing thesis). Moreover, a Turing machine can simulate practically any other type of machine that we can imagine. This is why neural networks or genetic computers can be successfully simulated on a Turing machine, while the opposite may not always be true. Please understand that this result says nothing about the efficiency or speed with which the Turing machine solves a problem using such a simulated machine. There is also a special type of Turing machine called a Universal Turing Machine, on which any Turing machine can be simulated.

This means that any reasonable machine we design can be simulated on a Turing machine. Our everyday computers are Turing machines in a sense: it has been proved that whatever can be computed on a Turing machine can also be computed on our PC (given enough memory) and vice versa. This means that our computers are, in effect, Universal Turing Machines, and this is why we can simulate all those genetic computers and neural networks on our PCs.
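This simulation claim is easy to demonstrate. The Python sketch below (the rule format and all names are my own) is an ordinary program on a PC simulating a tiny Turing machine: one that scans right along its tape and inverts every bit until it hits a blank.

```python
def run_tm(rules, tape, state, blank='_', max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, write, move), move in {-1, 0, +1}.
    Returns the tape contents once the machine enters the 'halt' state."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for a bit-flipper: scan right, invert each bit, halt at blank.
flip = {
    ('scan', '0'): ('scan', '1', +1),
    ('scan', '1'): ('scan', '0', +1),
    ('scan', '_'): ('halt', '_', 0),
}

print(run_tm(flip, '0110', 'scan'))  # prints 1001
```

Running any program on a PC is, at bottom, this same trick one level down: a universal machine animating the description of another machine.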

It is also important to note that our computer programs are Turing machines themselves. Whenever we write a computer program, what we are actually doing is defining a Turing machine to solve our problem. Since a UTM can simulate any TM, we can use our computer to “simulate” our programs, which is what we call “running the program”. Our programming languages, too, are more or less Universal Turing Machines, since they allow us to do every operation that our physical machine (the processor) can do. Languages that are abstractions of a full UTM are called Turing-complete languages, and anything that can be done in one of these languages can be done in all of them.

This means that our programming languages are conceptually capable of solving any problem that can be formulated. It is notable, on the other hand, that non-Turing-complete languages do exist, such as the very popular database query language SQL (at least in its classic form, without recursive extensions).

What's even more exciting is that some people have even postulated that the universe itself is Turing-computable, i.e., that there could exist a Turing Machine that simulates our entire universe. As mentioned above, this says nothing about the time, speed or efficiency of such a TM. It is quite possible that such a TM exists, although this is yet to be proved. Such a proof, if ever obtained, would imply that the UTM is conceptually the most powerful computer that can be built, since it would be able to simulate practically anything in the universe. I must make it clear that scientists have imagined machines more powerful than a Turing machine, by taking into account certain (as yet) unreasonable assumptions.

Hence, computing is a science that is all about machines. Since a program represents an algorithm, and since a program is a TM, it follows that designing an algorithm or a method is equivalent to designing a Turing Machine.

I would very much have liked to take a rigorous look at programming and algorithms at this stage but the article seems to be too long already. I shall continue this thread in the next article I write. Programming is simply a method of providing this algorithm that CS gives us, to a computing machine. For instance, the action of instructing a PC exactly how FFT works is programming. The method of FFT is computer science. The interpretation of the product as an area is Mathematics. And the actual use of this number to construct rectangular blocks of steel to support a building is physics. However, the specific chemical composition that makes this block support such a high building is chemistry. So as you can see, there can be no isolated existence or purpose for a specific field of scientific study. They are symbiotic with each other and must be treated that way.

It is quite remarkable how much an understanding of pure computing can help us make sense of the universe around us. The conjecture that the universe is Turing-computable, the idea that our PCs may be the most powerful design of computer possible, and the role computing plays in our daily lives all make one think. It really puts things in perspective and makes us understand what “computing” is all about.