The Minimum You Need to Know About Mono and Qt by Roland Hughes


 

Chapter 1

Fundamentals

 

1.1  The Confusion You Feel is Deliberate

When OOP (Object Oriented Programming) was just a gleam in the eye of software researchers, they needed to find a way to force it down the throats of assembly-hacking programmers everywhere. C had come onto the market as a widely accepted language, mainly because it allowed assembly-level hackers to exercise nearly complete control over a computer while using a high-level language. Most C compilers even supported the asm{} directive, which allowed highly skilled assembly programmers to drop assembly code into the middle of their C programs. (Once optimization was turned on this was an incredibly bad idea, but it didn't stop some from trying.) So, if you wanted to force OOP down the throats of programmers, the best way to do it was to create an OOP language named after, and mostly compatible with, C.

In truth, Bjarne Stroustrup, the creator of the language, didn't write a compiler for it initially. Instead he created Cfront, which translated C++ source into C source before compiling the resulting C. http://weseetips.com/tag/cfront/ Once C++ moved sufficiently away from C, in particular with exception handling, there was no reason to continue working on Cfront. By that time, actual compilers were out on the market.

Because of the low-level nature of C, and by extension C++, we had a lot of portability and hardware issues. LITTLE_ENDIAN and BIG_ENDIAN issues constantly reared their heads, along with "standard" data type sizes that varied from machine to machine. (LITTLE_ENDIAN refers to architectures that store the least significant byte at the lowest address, while BIG_ENDIAN refers to architectures that store the most significant byte at the lowest address.) Size issues were introduced by the machines themselves. Most desktop computers at the time of C++'s introduction were 16-bit DOS machines. 32-bit desktop computers were being introduced, but most midrange and mainframe machines were 64-bit or wider.

Let us not forget the final insult: ASCII vs. EBCDIC. Even if we could solve all of the other low-level problems, we still had different character encoding schemes in use.

There are many different stories about the creation of Java circulating in the world. Yes, we've already covered the story about the embedded device market, but how did they come up with the Java syntax? My personal belief that I will take to the grave despite all evidence to the contrary is that Sun had a bunch of people working for them who couldn't learn C++. In an effort to get some kind of benefit from hiring these people, higher minds at Sun created Java. It contained a bunch of C++ syntax, a bunch of wrapper classes to do those things which confused people, and, most importantly, it left out things like multiple inheritance which really baffled most of Sun's employees.

Of course Sun touted that Java was truly a GOTO-less programming language. Well, they may have nuked the word from the language, but they still implemented its functionality through labeled break and continue statements.

Java backers also touted how the language removed multiple inheritance, a massive source of confusion and bugs for many C++ developers. Well, initially the language didn't provide any machinery to replace it, and people started dropping Java as a development language. Once that happened, Java suddenly got interfaces. A