The reason is that, though subroutines are available, low-level languages still force a great deal of complexity on you. You must manage memory yourself, declare variables however trivial, call subroutines with a whole list of arguments even when only one of them is needed, and so on. And you must be able to pull together separate subroutine libraries to do file input/output, user interaction, data processing and graphics.
All you really want to do is tell the computer things like 'read this', 'Fourier transform that', and 'plot this', and have it be smart enough to do the right thing. What you are wishing for is, in effect, a high-level language; in this case it is called 'English'.
While natural language understanding is still quite a long way off, high-level computer languages are currently proliferating. Examples include Perl, Tcl, JavaScript, Visual Basic, Python, and many more. Such systems have also been developed for data processing. Worthy of note are commercial packages such as IDL ('Interactive Data Language', from Research Systems Inc., http://www.rsinc.com) and MATLAB (from The MathWorks, Inc., http://www.mathworks.com), as well as the free program Octave (http://www.octave.org). These implement special-purpose high-level languages in which data is handled in large chunks, via 'vector operations'.
What does this mean in practice? It means that if you write:
C=A+B
then the operation is performed even if A and B are large arrays containing many millions of numbers. Further, you can write something like:
D=FFT(C)
(to apply a Fast Fourier Transform) and get what you want. No messing about. These data analysis languages also implement nice graphics layers, as well as a large suite of mathematical algorithms.
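To make the idea concrete, here is a small sketch of the same two statements in NumPy, one of the vector-oriented systems available for Python (the variable names A, B, C and D simply mirror the statements above; this is an illustration of the style, not PDL code):

```python
import numpy as np

# A and B are large arrays; a single '+' adds every pair of elements.
A = np.arange(1_000_000, dtype=float)   # 0, 1, 2, ... 999999
B = np.ones(1_000_000)                  # a million ones
C = A + B                               # one statement, a million additions

# One call applies a Fast Fourier Transform to the whole array.
D = np.fft.fft(C)
```

The point is that C and D are computed without any loops, memory management, or per-element bookkeeping on the programmer's part; the language handles the whole array as a single object.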
Having used these systems ourselves, the authors of PDL can attest to the superiority of this approach in terms of simply getting things done. We of course believe that PDL is now better than all those systems, for quite a few reasons, and that your life will be easier if you get it and use it.
The case for a free Data Language
The free software community has taken off to an extraordinary extent in the last few years. This has been most vivid in the success of Linux, a free UNIX-like operating system. Sometimes this movement is described as 'Open Source' rather than 'free'; here the term 'free' refers to freedom of use rather than freedom from price. Although much of the code is indeed free or in the public domain, money is made from the sale of packaged distributions, support, books, etc. Nevertheless, the software is usually available at minimal cost.
One key point is that the source code is available: however the software is obtained, one can in principle change it to do whatever is required.
How is this relevant to data languages? The authors of PDL are all scientists. We write, obviously, as scientists, but believe our ideas are directly relevant to all users of PDL. The scientific community has for hundreds of years believed in the free exchange of ideas. It has been traditional to publish full details of how research is done openly in journals. This is very close in spirit to the ideas behind free software. These days much of what scientists do involves software; indeed, large software packages that facilitate particular kinds of analysis are often the subject of major papers in their own right, with the software made freely available on the Internet. Such software is commonly written in C or FORTRAN to allow general use.
Why aren't they working at a higher level? As we explained above, this would allow