A Look at the Ten Greatest Algorithms of the Twentieth Century
Posted by luyued on 2011-01-14 05:41 | Translator: July
1. 1946: The Monte Carlo Method
[1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.]
In 1946, John von Neumann, Stan Ulam, and Nick Metropolis, three scientists at the Los Alamos Scientific Laboratory in the United States, jointly devised what became known as the Monte Carlo method.
Its idea can be illustrated as follows:
draw a square with one-meter sides on a plaza, and chalk an arbitrary irregular shape inside it.
How do we compute the area of that irregular shape?
The Monte Carlo method tells us to scatter N beans (N being some very large natural number) uniformly over the square,
then count how many of them land inside the irregular shape, say M of them.
The area of the odd shape is then approximately M/N, and the larger N is, the more accurate the estimate.
Here we assume the beans all lie in one plane and do not overlap one another.
蒙特卡洛方法可用于近似计算圆周率:让计算机每次随机生成两个0到1之间的数,看这两个实数是否在单位
圆内。生成一系列随机点,统计单位圆内的点数与总点数,(圆面积和正方形面积之比为PI:1,PI为圆周率
),
当随机点取得越多(但即使取10的9次方个随机点时,其结果也仅在前4位与圆周率吻合)时,
其结果越接近于圆周率。
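To make the point-counting idea concrete, here is a minimal sketch in Python; the function name, the seed, and the sample count are my own illustrative choices, not part of the original.

```python
import random

def estimate_pi(n):
    """Monte Carlo estimate of pi: sample points uniformly in the unit
    square and count those inside the quarter circle x^2 + y^2 <= 1."""
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n      # quarter-circle area is pi/4 of the square

random.seed(0)                   # fixed seed so the run is reproducible
pi_hat = estimate_pi(100_000)
```

With 100,000 samples the error is typically on the order of 1/sqrt(N), a few parts in a thousand, matching the slow convergence noted above.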
2. 1947: The Simplex Method
[1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.]
In 1947, George Dantzig of the RAND Corporation invented the simplex method,
which went on to become a cornerstone of the field of linear programming.
Linear programming, simply put, asks for the extremum of a given objective function subject to a set of linear constraints
(all variables appear to the first power, for example a1*x1 + b1*x2 + c1*x3 > 0).
That may sound terribly abstract, but real-world uses are anything but rare: a company can devote only limited labor and materials to production (the "linear constraints"), while its goal is to maximize profit (the "objective function takes its maximum"). Seen that way, linear programming is not abstract at all.
As a branch of operations research, linear programming has become an important tool of management science.
Dantzig's simplex method is an extremely effective way of solving linear programming problems of this kind.
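To show the kind of problem the method handles, here is a toy dense-tableau simplex in Python for problems of the form "maximize c·x subject to Ax <= b, x >= 0, with b >= 0". It is a textbook sketch, not Dantzig's production algorithm; the function names and the small factory example are assumptions of mine.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b_i >= 0).
    Dense tableau with Bland's rule; returns (optimum, x)."""
    m, n = len(A), len(c)
    # Tableau rows: constraints with slack variables, then objective row.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))            # slack variables start basic
    while True:
        # Entering variable: first column with a negative reduced cost.
        piv_col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if piv_col is None:
            break                            # optimal tableau reached
        # Leaving variable: minimum ratio test (smallest index on ties).
        ratios = [(T[i][-1] / T[i][piv_col], i)
                  for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded LP")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Pivot: normalize the row, then clear the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[-1][-1], x

# Toy factory LP: maximize 3x + 5y with x <= 4, 2y <= 12, 3x + 2y <= 18.
opt, x = simplex([3.0, 5.0], [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]],
                 [4.0, 12.0, 18.0])
```

The algorithm walks from vertex to vertex of the feasible polytope, each step improving the objective, exactly the geometric picture behind Dantzig's method.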
3. 1950: Krylov Subspace Iteration Methods
[1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.]
In 1950, Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all of the Institute for Numerical Analysis at the National Bureau of Standards, initiated the development of Krylov subspace iteration methods.
Krylov subspace methods solve equations of the form Ax = b, where A is an n-by-n matrix; when n is sufficiently large, direct computation becomes very difficult.
The Krylov approach cleverly recasts the problem as the iteration Kx_{i+1} = Kx_i + b - Ax_i,
where K is a constructed matrix close to A but easier to work with (the underlying subspaces are named after the Russian mathematician Nikolai Krylov).
The beauty of the iterative formulation is that it reduces a complex problem to a sequence of easily computed substeps.
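One celebrated member of this family is the conjugate gradient method of Hestenes and Stiefel, for systems that are symmetric and positive definite. The following is a pure-Python teaching sketch under those assumptions; real solvers add preconditioning and careful numerics.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite matrix A,
    given as a list of rows. Plain conjugate gradient iteration."""
    n = len(b)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                    # residual b - A x, with x = 0 initially
    p = r[:]                    # first search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        # New direction, conjugate to all previous ones.
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# Small SPD system: [[4, 1], [1, 3]] x = [1, 2]; solution (1/11, 7/11).
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In exact arithmetic the method terminates in at most n steps; in practice it is used as an iterative method and stopped once the residual is small.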
4. 1951: The Decompositional Approach to Matrix Computations
[1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.]
In 1951, Alston Householder of Oak Ridge National Laboratory formalized the decompositional approach to matrix computations.
This approach showed that any matrix can be factored into triangular, diagonal, orthogonal, and other special forms,
and its significance is that it made the development of flexible matrix computation software packages possible.
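As a small example of the decompositional idea, here is a sketch of the Doolittle LU factorization in Python. It omits pivoting, so it assumes the pivots are nonzero; the function name and example matrix are mine.

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L U, with
    L unit lower triangular and U upper triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # fill row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # fill column i of L
            L[j][i] = (A[j][i]
                       - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
```

Once A = LU is in hand, solving Ax = b reduces to two easy triangular solves, which is exactly why factored forms make flexible software possible.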
5. 1957: The Fortran Optimizing Compiler
[1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler.]
In 1957, a team at IBM led by John Backus created the Fortran optimizing compiler.
Fortran, a contraction of the two words "Formula Translation", was
the world's first high-level programming language to be formally adopted, and it remains in use to this day.
The language has since evolved through many revisions, up to the well-known Fortran 2008 standard.
6. 1959-61: The QR Algorithm for Computing Eigenvalues
[1959–61: J.G.F. Francis of Ferranti Ltd, London, finds a stable method for computing
eigenvalues, known as the QR algorithm.]
Between 1959 and 1961, J.G.F. Francis of Ferranti Ltd. in London found a stable method for computing eigenvalues,
the famous QR algorithm.
This is another algorithm rooted in linear algebra; anyone who has studied the subject will remember "matrix eigenvalues",
whose computation is among the most central tasks of matrix computation. The traditional approach involves finding the roots of high-degree polynomials, which becomes very difficult for large problems.
The QR algorithm factors the matrix into the product of an orthogonal matrix (hopefully you know what that is :D) and an upper triangular matrix.
Like the Krylov methods above, it is iterative: it reduces the complex root-finding problem to a sequence of easily
computed substeps, making it feasible to compute the eigenvalues of large matrices by computer.
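A minimal sketch of the unshifted QR iteration in pure Python, using classical Gram-Schmidt for the factorization. Production eigensolvers add shifts, Householder reflections, and deflation; all names here, and the small symmetric example, are my own.

```python
def qr_decompose(A):
    """Classical Gram-Schmidt QR factorization: A = Q R."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q_cols):
            R[i][j] = sum(qk * vk for qk, vk in zip(q, v))
            w = [wk - R[i][j] * qk for wk, qk in zip(w, q)]
        R[j][j] = sum(wk * wk for wk in w) ** 0.5
        Q_cols.append([wk / R[j][j] for wk in w])
    Q = [[Q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

def qr_eigenvalues(A, iters=50):
    """Unshifted QR iteration: factor A_k = Q_k R_k, then form
    A_{k+1} = R_k Q_k. For this symmetric example the diagonal
    converges to the eigenvalues."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))

eig = qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]])   # true eigenvalues: 1 and 3
```

Each step A_{k+1} = R_k Q_k = Q_k^T A_k Q_k is a similarity transform, so the eigenvalues never change while the off-diagonal entries melt away.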
7. 1962: Quicksort
[1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.]
In 1962, Tony Hoare of Elliott Brothers, Ltd., London, presented Quicksort.
Congratulations: this may well be the first algorithm on the list you already know.
Quicksort is a classic among sorting algorithms, and its influence can be seen everywhere.
Designed by Sir Tony Hoare, its basic idea is to partition the sequence to be sorted into two halves,
with everything "small" on the left and everything "large" on the right, recursing on each half until the whole sequence is in order.
For Hoare, Quicksort was almost an incidental little discovery; his main contributions to computer science include
the theory of formal methods and his work on the ALGOL 60 programming language, achievements for which he received the 1980 Turing Award.
Quicksort's average time complexity is only O(N log N), a historic leap compared with
ordinary selection sort, bubble sort, and the like.
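The partition-and-recurse idea fits in a few lines of Python. This is a functional variant for clarity; Hoare's original partitions the array in place.

```python
def quicksort(seq):
    """Sort a list by recursive partitioning around a pivot."""
    if len(seq) <= 1:
        return seq[:]                      # base case: already sorted
    pivot = seq[len(seq) // 2]
    smaller = [x for x in seq if x < pivot]   # the "small" half
    equal   = [x for x in seq if x == pivot]
    larger  = [x for x in seq if x > pivot]   # the "large" half
    return quicksort(smaller) + equal + quicksort(larger)
```

The O(N log N) average comes from halving the problem at each level; a consistently bad pivot degrades it to O(N^2), which is why practical versions choose pivots carefully.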
8. 1965: The Fast Fourier Transform
[1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton
University and AT&T Bell Laboratories unveil the fast Fourier transform.]
In 1965, James Cooley of the IBM T.J. Watson Research Center, together with John Tukey of Princeton University
and AT&T Bell Laboratories, unveiled the fast Fourier transform.
The FFT is a fast algorithm for the discrete Fourier transform (the bedrock of digital signal processing), with a time complexity of only
O(N log N). Even more important than its time efficiency, the FFT is very easy to implement in hardware, which has given it
extremely wide application in electronics.
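The radix-2 Cooley-Tukey recursion can be sketched as follows; this is a teaching version for power-of-two lengths, whereas library FFTs are iterative and far more optimized.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])       # transform of even-indexed samples
    odd = fft(x[1::2])        # transform of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor combines the two half-size transforms.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

spectrum = fft([1, 1, 1, 1, 0, 0, 0, 0])   # 8-point transform of a pulse
```

Splitting an N-point transform into two N/2-point transforms at every level is what turns the naive O(N^2) sum into O(N log N).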
Later, in my classic-algorithms series, I will discuss this algorithm in detail.
9. 1977: Integer Relation Detection
[1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer
relation detection algorithm.]
In 1977, Helaman Ferguson and Rodney Forcade of Brigham Young University advanced an integer relation detection algorithm.
Integer relation detection is an ancient problem whose history reaches back as far as Euclid. Concretely:
given a set of real numbers x1, x2, ..., xn, do there exist integers a1, a2, ..., an, not all zero, such that
a1*x1 + a2*x2 + ... + an*xn = 0?
That year, Ferguson and Forcade solved this problem.
The algorithm has been applied to "simplify the computation of Feynman diagrams in quantum field theory". OK, you don't need to master it; awareness is enough. :D
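To see what an "integer relation" looks like, here is a brute-force toy search in Python. It bears no resemblance to the efficiency of the Ferguson-Forcade algorithm or its successors (PSLQ, LLL); the golden-ratio example and all names are my own.

```python
from itertools import product

def find_integer_relation(xs, bound=3, eps=1e-9):
    """Exhaustive search for small integers a, not all zero, with
    sum(a_i * x_i) approximately zero. A toy illustration only:
    the search space grows exponentially with len(xs) and bound."""
    for coeffs in product(range(-bound, bound + 1), repeat=len(xs)):
        if any(coeffs) and abs(sum(a * x for a, x in zip(coeffs, xs))) < eps:
            return coeffs
    return None

phi = (1 + 5 ** 0.5) / 2                 # golden ratio: phi^2 = phi + 1
rel = find_integer_relation([1.0, phi, phi * phi])
```

For (1, phi, phi^2) the search recovers a multiple of the relation 1 + phi - phi^2 = 0, the defining equation of the golden ratio.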
10. 1987: The Fast Multipole Algorithm
[1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole
algorithm.]
In 1987, Leslie Greengard and Vladimir Rokhlin of Yale University invented the fast multipole algorithm.
The fast multipole algorithm is used for "the accurate computation of the motions of N particles interacting via gravitational or electrostatic forces,
such as the stars in a galaxy or the atoms in a protein". OK, awareness is enough.
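The fast multipole method itself is too involved to sketch here, but the O(N^2) direct summation it accelerates is easy to show. This baseline (in 2D, with an assumed softening parameter to avoid division by zero) computes the pairwise sums that FMM approximates in roughly O(N) time by grouping distant particles.

```python
def direct_forces(positions, masses, G=1.0, softening=1e-3):
    """Naive O(N^2) pairwise gravitational forces on N particles in 2D.
    This is the brute-force baseline that the fast multipole method
    was invented to accelerate."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx, dy = positions[j][0] - xi, positions[j][1] - yi
            r2 = dx * dx + dy * dy + softening ** 2   # softened distance^2
            f = G * masses[i] * masses[j] / (r2 * r2 ** 0.5)
            forces[i][0] += f * dx
            forces[i][1] += f * dy
    return forces

forces = direct_forces([(0.0, 0.0), (1.0, 0.0)], [1.0, 1.0])
```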
Original article:
The Best of the 20th Century : Editors Name Top 10 Algorithms
By Barry A. Cipra
SIAM NEWS MAY 2000
Algos is the Greek word for pain. Algor is Latin, to be cold. Neither is the root for algorithm, which stems instead from al-Khwarizmi, the name of the ninth-century Arab scholar whose book al-jabr wa'l muqabalah developed into today's high school algebra textbooks. Al-Khwarizmi stressed the importance of methodical procedures for solving problems. Were he around today, he'd no doubt be impressed by the advances in his eponymous approach.
Some of the very best algorithms of the computer age are highlighted in the January/February 2000 issue of Computing in Science & Engineering, a joint publication of the American Institute of Physics and the IEEE Computer Society. Guest editors Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory and Francis Sullivan of the Center for Computing Sciences at the Institute of Defense Analyses put together a list they call the "Top Ten Algorithms of the Century".
"We tried to assemble the 10 algorithms with the greatest influence on the development and practice of science and engineering in the 20th century," Dongarra and Sullivan write. As with any top-10 list, their selections - and non-selections - are bound to be controversial, they acknowledge. When it comes to picking the algorithmic best, there seems to be no best algorithm.
Without further ado, here's the CiSE top-10 list, in chronological order. (Dates and names associated with the algorithms should be read as first-order approximations. Most algorithms take shape over time, with many contributors.)
1946 : John von Neumann, Stan Ulam, and Nick Metropolis all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo Method.
The Metropolis algorithm aims to obtain approximate solutions to numerical problems with unmanageably many degrees of freedom and to combinatorial problems of factorial size, by mimicking a random process. Given the digital computer's reputation for deterministic calculation, it's fitting that one of its earliest applications was the generation of random numbers.
1947 : George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.
In terms of widespread application, Dantzig's algorithm is one of the most successful of all time : Linear programming dominates the world of industry, where economic survival depends on the ability to optimize within budgetary and other constraints. (Of course, the "real" problems of industry are often nonlinear ; the use of linear programming is sometimes dictated by the computational budget.) The simplex method is an elegant way of arriving at optimal answers. Although theoretically susceptible to exponential delays, the algorithm in practice is highly efficient - which in itself says something interesting about the nature of computation.
1950 : Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.
These algorithms address the seemingly simple task of solving equations of the form Ax = b. The catch, of course, is that A is a huge matrix, so that the algebraic answer is not so easy to compute. (Indeed, matrix "division" is not a particularly useful concept.) Iterative methods - such as solving equations of the form Kx_{i+1} = Kx_i + b - Ax_i with a simpler matrix K that's ideally "close" to A - lead to the study of Krylov subspaces. Named for the Russian mathematician Nikolai Krylov, Krylov subspaces are spanned by powers of a matrix applied to an initial "remainder" vector r_0 = b - Ax_0. Lanczos found a nifty way to generate an orthogonal basis for such a subspace when the matrix is symmetric. Hestenes and Stiefel proposed an even niftier method, known as the conjugate gradient method, for systems that are both symmetric and positive definite. Over the last 50 years, numerous researchers have improved and extended these algorithms. The current suite includes techniques for non-symmetric systems, with acronyms like GMRES and Bi-CGSTAB. (GMRES and Bi-CGSTAB premiered in SIAM Journal of Scientific and Statistical Computing, in 1986 and 1992, respectively.)
1951 : Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations
The ability to factor matrices into triangular, diagonal, orthogonal, and other special forms has turned out to be extremely useful. The decompositional approach has enabled software developers to produce flexible and efficient matrix packages. It also facilitates the analysis of rounding errors, one of the big bugbears of numerical linear algebra. (In 1961, James Wilkinson of the National Physical Laboratory in London published a seminal paper in the Journal of the ACM, titled "Error Analysis of Direct Methods of Matrix Inversion," based on the LU decomposition of a matrix as a product of lower and upper triangular factors.)
1957 : John Backus leads a team at IBM in developing the Fortran optimizing compiler.
The creation of Fortran may rank as the single most important event in the history of computer programming : Finally, scientists (and others) could tell the computer what they wanted it to do, without having to descend into the netherworld of machine code. Although modest by modern compiler standards - Fortran I consisted of a mere 23,500 assembly-language instructions - the early compiler was nonetheless capable of surprisingly sophisticated computations. As Backus himself recalls in a recent history of Fortran I, II and III, published in 1998 in the IEEE Annals of the History of Computing, the compiler "produced code of such efficiency that its output would startle the programmers who studied it."
1959-61 : J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.
Eigenvalues are arguably the most important numbers associated with matrices - and they can be the trickiest to compute. It's relatively easy to transform a square matrix into a matrix that's "almost" upper triangular, meaning one with a single extra set of nonzero entries just below the main diagonal. But chipping away those final nonzeros, without launching an avalanche of error, is nontrivial. The QR algorithm is just the ticket. Based on the QR decomposition, which writes A as the product of an orthogonal matrix Q and an upper triangular matrix R, this approach iteratively changes A_k = Q_k R_k into A_{k+1} = R_k Q_k, with a few bells and whistles for accelerating convergence to upper triangular form. By the mid-1960s, the QR algorithm had turned once-formidable eigenvalue problems into routine calculations.
1962 : Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.
Putting N things in numerical or alphabetical order is mind-numbingly mundane. The intellectual challenge lies in devising ways of doing so quickly. Hoare's algorithm used the age-old recursive strategy of divide and conquer to solve the problem : Pick one elem