Being safe on the internet (was Re: Here we go again - ISP DPI, but is it interception?)
igb at batten.eu.org
Sat Aug 7 10:45:12 BST 2010
"Now I want to argue that worse-is-better is better. C is a
programming language designed for writing Unix, and it was designed
using the New Jersey approach. C is therefore a language for which it
is easy to write a decent compiler, and it requires the programmer to
write text that is easy for the compiler to interpret. Some have
called C a fancy assembly language. Both early Unix and C compilers
had simple structures, are easy to port, require few machine resources
to run, and provide about 50%-80% of what you want from an operating
system and programming language.
Half the computers that exist at any point are worse than median
(smaller or slower). Unix and C work fine on them. The worse-is-better
philosophy means that implementation simplicity has highest priority,
which means Unix and C are easy to port on such machines. Therefore,
one expects that if the 50% functionality Unix and C support is
satisfactory, they will start to appear everywhere. And they have:
Unix and C are the ultimate computer viruses.
A further benefit of the worse-is-better philosophy is that the
programmer is conditioned to sacrifice some safety, convenience, and
hassle to get good performance and modest resource use. Programs
written using the New Jersey approach will work well both in small
machines and large ones, and the code will be portable because it is
written on top of a virus.
It is important to remember that the initial virus has to be basically
good. If so, the viral spread is assured as long as it is portable.
Once the virus has spread, there will be pressure to improve it,
possibly by increasing its functionality closer to 90%, but users have
already been conditioned to accept worse than the right thing.
Therefore, the worse-is-better software first will gain acceptance,
second will condition its users to expect less, and third will be
improved to a point that is almost the right thing. In concrete terms,
even though Lisp compilers in 1987 were about as good as C compilers,
there are many more compiler experts who want to make C compilers
better than want to make Lisp compilers better.
The good news is that in 1995 we will have a good operating system and
programming language; the bad news is that they will be Unix and C++."
   -- Richard P. Gabriel, "The Rise of Worse is Better"
On 7 Aug 2010, at 07:07, Peter Tomlinson wrote:
> Tom Thomson wrote:
>> Roland Perry wrote:
>>> It seems to be worse than that... why are these products so prone
>>> to vulnerabilities? For example, one that used to occur over and
>>> over again was "buffer overflow". Surely there must be programming
>>> (or management) techniques that could eliminate them entirely?
>> There are indeed appropriate techniques, but these techniques
>> involve either or both of using hardware which supports memory
>> management (as implemented by old-fashioned mainframe providers and
>> some old-fashioned mini-computer providers) and programming in
>> languages whose operational semantics require bounds checking and
>> separation of code and data. Systems using the technologies
>> developed in the late 1960s and the 1970s by companies such as
>> Burroughs, ICL, and even CTL could not have suffered from most of
>> the vulnerabilities that we see today.
> The memory stirs, taking me back to 1968 when I designed the very
> simple memory management hardware for the ICL 1904A (and in the
> process fixed an error in the 1906A's MMU). Took the software people
> another 2 years to get George 4 running. So that was old-fashioned,
> was it, Tom? It was state of the art then, in the commercial
> environment that soon after took a wrong turn...