Algorithm



An algorithm (pronounced AL-go-rith-um) is a procedure or formula for solving a problem. The word derives from the name of the mathematician Mohammed ibn-Musa al-Khwarizmi, who was part of the royal court in Baghdad and lived from about 780 to 850. Al-Khwarizmi's work is also the likely source for the word algebra.
In mathematics and computer science, an algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.
More precisely, an algorithm is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, will proceed through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[7]
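As a minimal sketch of these ideas (not from the original article), the short C program below computes the digit sum of an integer; the choice of problem and the name digit_sum are illustrative assumptions. The comments mark the initial state, the finite sequence of well-defined state transitions, the output, and the terminating final state.

```c
/* Illustrative sketch only: the problem (digit sum) and function name
 * are assumptions chosen to show the structure of an algorithm. */
#include <stdio.h>

static int digit_sum(int n)          /* initial input n (assumed non-negative) */
{
    int sum = 0;                     /* initial state */
    while (n > 0) {                  /* each iteration is one well-defined state transition */
        sum += n % 10;
        n /= 10;
    }
    return sum;                      /* final state: the "output" */
}

int main(void)
{
    printf("%d\n", digit_sum(1984)); /* prints 22, then the computation terminates */
    return 0;
}
```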
Though al-Khwārizmī's algorism referred only to the rules of performing arithmetic using Hindu-Arabic numerals, a partial formalization of what would become the modern algorithm began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability"[8] or "effective method";[9] those formalizations included the Gödel-Herbrand-Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.[10]

Informal definition

For a detailed presentation of the various points of view around the definition of "algorithm" see Algorithm characterizations. For examples of simple addition algorithms specified in the detailed manner described in Algorithm characterizations, see Algorithm examples.
While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations."[11] For some people, a program is only an algorithm if it stops eventually; for others, a program is only an algorithm if it stops before a given number of calculation steps.[12]
A prototypical example of an algorithm is Euclid's algorithm for determining the greatest common divisor of two integers; one version (there are others) is sketched in the code below and described as an example in a later section.
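Here is a minimal sketch of Euclid's algorithm in C, assuming non-negative integer inputs; the iterative remainder form shown is one of several equivalent formulations of the algorithm.

```c
/* Sketch of Euclid's algorithm (iterative remainder form),
 * assuming non-negative integer inputs. */
#include <stdio.h>

static unsigned gcd(unsigned m, unsigned n)
{
    while (n != 0) {        /* repeat until the remainder is zero */
        unsigned r = m % n; /* divide and keep the remainder */
        m = n;
        n = r;
    }
    return m;               /* when n reaches 0, m holds the GCD */
}

int main(void)
{
    printf("%u\n", gcd(1071, 462)); /* prints 21 */
    return 0;
}
```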
Boolos & Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation:
No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit ...you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human who is capable of carrying out only very elementary operations on symbols.[13]
The term "enumerably infinite" means "countable using integers perhaps extending to infinity." Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be chosen from 0 to infinity. Thus an algorithm can be an algebraic equation such as y = m + n—two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example):
Precise instructions (in language understood by "the computer")[14] for a fast, efficient, "good"[15] process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities)[16] to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively"[17] produce, in a "reasonable" time,[18] output-integer y at a specified place and in a specified format.
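As a sketch of the addition example above (the input format and variable names are assumptions, not part of the original description), the following C program reads two arbitrary input integers m and n, computes y = m + n, and emits y at a specified place (standard output) in a specified format (decimal):

```c
/* Sketch of the addition example: decode arbitrary input integers m and n,
 * process them, and produce output y in a specified place and format.
 * The whitespace-separated input format is an assumption for illustration. */
#include <stdio.h>

int main(void)
{
    long m, n;
    if (scanf("%ld %ld", &m, &n) != 2) {  /* decode the input symbols */
        fprintf(stderr, "expected two integers\n");
        return 1;
    }
    long y = m + n;                       /* process: the addition itself */
    printf("%ld + %ld = %ld\n", m, n, y); /* output y in a specified format */
    return 0;
}
```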
The concept of algorithm is also used to define the notion of decidability, which is central to explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. Such uncertainties, which characterize ongoing work, are why no definition of algorithm is available that suits both the concrete (in some sense) and the abstract usage of the term.
