C# Under The Microscope

For anyone not following the story thus far, C# is here, courtesy of Microsoft, Anders Hejlsberg, and a raftload of other languages from which its features are derived (or added to). Napster's Nir Arbel here dissects what C# means to programmers used to other languages, and explains a bit about where it fits into the grand scheme of things. Be warned: Nir takes a pragmatic, low-dogma approach which may be unsettling to some readers. Please watch your head.

To Begin at the Ending

I'm a big fan of programming languages, possibly more than of actual programming. Every once in a while I hear about this new language that is just "brilliant", that "does things differently" and that "takes a whole different approach to programming". I typically then take the necessary time off my regularly scheduled C++ programming, learn enough about the new language to get excited, but not enough to actually do anything useful with it, rave about it for a couple of days, and then quietly and without protest go back to my C++ programming.

And so, when I learned of Microsoft's new up-and-comer, C# (pronunciation: whatever), I became duly excited and went forth to learn as much about it as possible.

Last things first: On paper, C# is very interesting. It does very little that's truly new and innovative, but it does do several things differently, and in this paper I hope to explore and present at least some of the more important differences between C# and its obvious influences: C++ and Java. So, skipping the obligatory Slashdot "speaking favorably of Microsoft" apology, let's talk about C#, the language.

How is it like Java/C++?

In the look & feel department, C# feels very much like C++, even more so than Java does. While Java borrows much of C++'s syntax, some of the corresponding language constructs have a slightly different form of use. While this is hardly a complaint, it's interesting to note that the designers of C# went a little further in making it look like C++. This is good for the same reason it was good with Java. Being a professional C++ programmer, I use C++ way more than any other language. Eiffel, for instance, has a much cleaner syntax than C++, C# or Java, and at face value it does seem as though one should bear with new syntax if it leads to cleaner, more easily understandable code, but for an old dog like myself, not having to remember so much new syntax when switching languages is nothing short of a blessing.

C# borrows much from Java, a debt which Microsoft has not acknowledged, and possibly never will. Just like Java, C# does automatic garbage collection. This means that, unlike with C and C++, there is no need to track the use of created objects: the runtime automatically knows when objects are no longer in use and eventually destroys them. This makes working with large object groups considerably simpler, although there have been a few instances where I was faced with a programming problem whose solution depended on objects *not* being automatically destroyed, as they were supposed to exist separate from the main object hierarchy and take care of their own destruction when the time was right. Stroustrup's vision for C++ treats automatic garbage collection as an optional feature, which might make the language more complicated to use, but would allow better performance and increased design flexibility.

One interesting way in which C# deals with the performance cost of automatic garbage collection is by allowing you to define classes whose objects always copy by value, instead of the default copy by reference, which means there is no need to garbage-collect such objects. This is done, confusingly enough, by declaring such classes as structs. This is very different from C++ structs, even though the declaration looks exactly the same; C++ structs are just classes whose members are public by default, instead of private.

Another idea lifted directly off Java, and one which turned out to be very controversial, is its treatment of multiple inheritance. In what seemed like a step backwards, Java did not allow you to define classes that inherit from more than one class. Java did let you define "interfaces", which work like C++ abstract classes, but are semantically clearer: an interface is a functional contract that declares one or more methods. A class can choose to "sign" such a contract by inheriting it, and providing a working implementation for every method that the interface declares. In Java, you can inherit as many interfaces as you want. The rationale for all this is that inheriting from more than one class raises too many possible problems, most notably those of clashing implementations and repeated inheritance. On a side note, the cleanest separation between interface and implementation that I know of is Sather's, where classes can provide either implementation or interface, but not both.
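To make the contract idea concrete, here is a minimal sketch in C# syntax (the interface and class names are invented for illustration):

    // The contract: anyone who signs it must provide Print()
    interface IPrintable
    {
        void Print();
    }

    // "Signing" the contract by inheriting it and implementing every method
    class Report : IPrintable
    {
        public void Print()
        {
            System.Console.WriteLine("report contents");
        }
    }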

So what else is new?

One new feature that I mentioned already was that of copy-by-value objects. This seemingly small improvement is a potentially huge performance saver! With C++, one is regularly tempted to describe even the simplest constructs as classes, and in so doing make it safer and simpler to use them. For example, a phone directory program might define a phone record as a class, and would maintain one PhoneRecord object per actual record. In Java, each and every one of those objects would be garbage collected! Now, Java uses mark-and-sweep in order to garbage collect. It works like this: the JVM starts with the program's main object, and recursively descends through references to other objects. Every object that is traversed is marked as referenced. When this is done, all of the objects that aren't marked are destroyed. In the phone book program, especially if there are thousands and thousands of phone records, this can drastically increase the time that it takes the JVM to go through the marking phase. In C#, you'd be able to avoid all this by defining PhoneRecord as a struct instead of a class.
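A minimal sketch of the distinction, with invented names (in the struct version, each record is a plain value living inline in its container, so the collector never has to trace it as a separate object):

    // Declared as a struct: copies by value, never individually garbage-collected
    struct PhoneRecord
    {
        public int AreaCode;
        public int Number;
    }

    // Declared as a class: every instance is a separate, traced heap object
    class PhoneRecordObject
    {
        public int AreaCode;
        public int Number;
    }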

Another thing that C# does better than Java is the type-unification system. In Java, all classes are implicitly descendants of the Object class, which supplies several extremely useful services. C# classes are also all eventual descendants of the object class, but unlike Java, primitives such as integers, booleans and floating-point types are considered to be regular classes. Java supplies classes that correspond with the primitive types, and mapping an object value to a primitive value and vice versa is very simple, but C# makes it that much simpler by eliminating that duplication.
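As a quick illustration of what that unification buys you (a sketch; these lines go inside any method):

    int i = 42;
    string s = i.ToString();   // one of Object's services, directly on a primitive
    object o = i;              // a primitive standing in wherever an object is expected

In Java, you'd first have to wrap the value in an Integer object by hand to do either of these.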

Personally, I found C#'s support for events to be a very exciting new feature! Whereas an object method operates the object in a certain way, object events let the object notify the outside world of particular changes in its state. A Socket class, for instance, might define a ReadPossible event, or a data object might raise a DataChanged event. Other objects may then subscribe to such an event so that they'd be able to do some work when the event is raised. Events may very well be considered to be "reverse-functions", in the sense that rather than operate the object, they allow the object to operate the outside world, and in my programming experience, events are almost as important as methods themselves.

While you could always implement events in C by taking pointers to functions, or optionally in C++ and Java by taking objects that subclass a corresponding handler type, C# allows you to define class events as regular members. Such event members can be defined to take any delegate type. Delegates are the C# version of function pointers. Whereas a C function pointer consists of nothing but a callable address, a delegate is an object reference as well as a method reference. Delegates are callable, and when called, operate the stored method upon the stored object reference. This design, which may seem less object-oriented than the Java approach of defining a handler interface and having subscribers subclass the interface and instantiate a subscriber, is considerably more straightforward and makes using events nearly as simple as invoking object methods.
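Here is a minimal sketch of the whole arrangement in C# syntax, with invented names throughout:

    // A delegate type: a method reference plus the object to invoke it on
    delegate void DataChangedHandler();

    class DataObject
    {
        // The event is declared like any other class member
        public event DataChangedHandler DataChanged;

        public void Modify()
        {
            // Raising the event invokes every subscribed delegate
            if (DataChanged != null)
                DataChanged();
        }
    }

A subscriber attaches one of its own methods with something like data.DataChanged += new DataChangedHandler(this.OnDataChanged); no handler class or interface required.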

Events are one example of how C# takes a popular use of pre-existing object-oriented mechanisms and makes it explicit by giving it a name and logic of its own. Properties are another example, even though they're not as much of a labor-saver as events are. It is very commonplace in C++ to provide "getters" and "setters" for private data members, in order to provide controlled access to them. C# treats such "protected" data members as Properties, and the declaration syntax of properties is such that you have to provide getter and setter functions for each property. In fact, properties do not have to correspond to real data members at all! They may very well be the product of some calculation or other operation.
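A short sketch of the declaration syntax, with invented names (the second property shows the no-real-data-member case):

    class Person
    {
        private string name;   // the backing data member

        public string Name     // a property providing controlled access to it
        {
            get { return name; }
            set { name = value; }
        }

        public int NameLength  // a property backed by no data member at all
        {
            get { return name.Length; }
        }
    }

To the caller, both read like plain data members: int n = somePerson.NameLength;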

And then there is, by far, the ugliest, most redundant and hard-to-understand language construct in C#: the Attribute. Attributes are objects of certain types that can be attached to any variable or static language construct. At run-time, practically anything can be queried for the value of the attributes attached to it. This sounds like the sort of hack someone would work into a language ten years after it's been in use, when there's no other way to do something important without breaking backwards compatibility. Attributes are C#'s version of Java reflection, but with none of the elegance and appropriateness. In general, and especially in light of C#'s overall design, the Attributes feature is out of place, and inexcusable.
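For what it's worth, here is roughly what the mechanism looks like (a sketch with invented names): you define an attribute class, attach it to a declaration, and query it back at run-time through reflection:

    // An attribute is just a class deriving from System.Attribute
    class AuthorAttribute : System.Attribute
    {
        public string Name;
        public AuthorAttribute(string name) { Name = name; }
    }

    [Author("Nir")]   // attached to a static language construct
    class PhoneBook { }

    // Queried back at run-time:
    //   object[] attrs = typeof(PhoneBook).GetCustomAttributes(true);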

What is it missing?

Being an unborn language, there is much that C# does not yet promise to deliver, and for which it can't be criticized. First of all, there is no telling just how well it will perform. Java is, in many ways, the better language, but one of the prime reasons it's been avoided is its relatively slow performance, especially compared to corresponding C and C++ implementations. It's not yet clear whether C# programs would need the equivalent of a Java Virtual Machine or whether they could be compiled directly into standalone executables, which might positively affect C#'s performance and possibly even set it as a viable successor to C++, at the very least on Windows. While there is much talk of C# being cross-platform, it is unclear just how feasible implementing C# on non-Windows platforms is going to be. The required .NET framework consists of much that is, at least at the moment, Windows-specific, and C# relies heavily on Microsoft's Component Object Model. All things considered, setting up a proper environment for C# on other platforms should prove to be a massive undertaking that perhaps none other than Microsoft can afford.

Furthermore, while there is mention of a provided system library, it's not clear what services such a library would provide. C++ provides a standard library that allows basic OS operations, the immensely useful STL, and a powerful stream I/O system with basic implementations for files and memory buffers. The Java core libraries go much further by providing classes for anything from data structures, to communications, to GUI. It remains to be seen how C#'s system library will fare in comparison.

One thing that's sure to be missing from C#, and very sadly at that, is any form of genericity. Genericity, as implemented in C++, allows one to define "types with holes". Such types, when supplied with the missing information, are used to create new types on the spot, and are therefore considered to be "templates" for types. A good example of a useful type template is C++'s list, which can be used to create linked lists for values of any type. Unlike a C linked list that takes in pointers to void, or a Java linked list that takes Object references, a list instantiated from the C++ list template is type-safe. That is to say, it will only take in values of the type for which it was instantiated. While it is true that inheritance and genericity are often interchangeable, having both makes for a safer, possibly faster development platform.
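To see what the omission costs in practice, here is a sketch of the cast-based workaround you are left with (assuming the .NET ArrayList collection class; these lines go inside any method):

    System.Collections.ArrayList numbers = new System.Collections.ArrayList();
    numbers.Add("555-0100");            // anything at all goes in, as an object
    string first = (string)numbers[0];  // and needs a run-time-checked cast on the way out

Put a value of the wrong type in, and nothing complains until some distant cast blows up at run-time; a genuine list-of-string template would have caught the mistake at compile time.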

The designers of C# have admitted the usefulness of genericity, but also confessed that C# is not going to support genericity on first release. More interestingly, they are unhappy with C++'s approach to genericity, which is based entirely on templates. It would be interesting to see what approach C# would take towards the concept, seeing as templates are pretty much synonymous with genericity at the moment.

To sum it up

Many now refer to C# as a Java wannabe, and there is much evidence to support this notion. C# doesn't only borrow a number of ideas from Java; it seems to follow up on Java's sense of clean design. It's a somewhat sad observation, then, that C#, purely as a language, not only provides a fraction of the innovation and daring that Java did, it also falls just a little behind Java where cleanliness and simplicity are concerned. However, if you're someone like myself, who uses Windows as their primary development platform and needs to use C or C++ because they cannot afford the overhead that Java incurs, it's possible that C# will turn out to be a very beneficial compromise.

Comments
  • by Slur ( 61510 ) on Wednesday August 09, 2000 @11:13AM (#866850) Homepage Journal
    I'm sorry, kids, but any language that is tied to a single OS - as C# is destined to be - is nothing but a scripting extension. I don't care how many bells and whistles the language has.

    C was created specifically to emulate a "universal cpu" and to make it easier to write software across platforms. C++ extended that mission with the added benefit of reusable code (cross-application). C# is a step in the wrong direction if it pretends to be a language in the same class as others with 'C' in the name.

    As far as I can see, its main innovations are little more than invisible methods.



    --------
    Yeah, I'm a Mac programmer. You got a problem with that?
  • by Kobes ( 66985 ) on Wednesday August 09, 2000 @10:26AM (#866851)
    Interestingly enough, this new language, "C##", has already been dubbed "D" by various industry experts.

    (Note: Having to explain this joke means it is a great failure.)
  • "I mean, what would be the advantage??"

    There probably won't be a technical advantage, but MS seldom creates things because they'll have a technical advantage; MS creates things that give MS a strategic advantage. In this case, MS needed something of their own to tie into application development for their .NET "vision". .NET has the potential to make vast amounts of money for MS (that will make their current revenues look like small change) but they need as much lock-in as possible. If a whole generation of programmers learns C# instead of C++ and Java (just wait for the deals that MS makes with universities, it'll happen soon enough: "we'll donate hardware to your university if you teach C# in the courses") then those developers are primed for .NET development.

    Sure, MS could use C++ or Java for .NET, but then developer skills are general enough for those developers to use anywhere.

    Of course, it could be I don't know what I'm talking about, because it's difficult to really tell what C# will be about, what with all the hype surrounding it.

  • by Nigel Bree ( 45256 ) on Wednesday August 09, 2000 @10:30AM (#866861) Homepage
    First off, from the main article.
    > Now, Java uses mark-and-sweep in order to garbage collect.

    No, it doesn't specify any such thing. You're perfectly welcome to use any collection system for Java objects you like, including high-performance generational/copying collectors.

    As it happens, the bulk of Java implementations do use mark-sweep as part of a conservative collection approach, because of the need to interact with code that is not GC-aware. That's easiest to structure as a simple mark/sweep pre-pass to the real collection phase, which can do pretty much anything it wants that doesn't involve moving or freeing the conservatively blacklisted objects.

    I have a GC library for C++ I've written which works exactly like this - mark/sweep conservative phase, generational copy collector phase - and works just fine.

    As for the copy-by-value/copy-by-reference distinction, template library authors already make this distinction for performance (at least I do in mine) and provide simple ways to annotate classes so templates expand to by-reference versions. That said, the biggest problem with that is that even in MSVC++ 6.0, the template support is still so broken that you spend more time fighting internal compiler errors than coding :-(

    Back to the article being replied to:
    > Part of the uniqueness of C# is its conception of code reuse - for instance, instead of purchasing a commercial garbage-collector for your C++ code, you get one for free from C#.

    Huh? It's "unique" to do something that most every programming language outside the Algol/Pascal/C family has done from day one?

    > But where does this garbage collector reside?

    It's in the language run-time, which can be wherever the implementation gives you the option of putting it. Y'know, like malloc ().

    In case you've never seen one, a garbage collector is not a big piece of code - a simple but perfectly effective one is typically much smaller than the equivalent malloc () code. For high-performance allocator implementations (like the impressive Hoard from Paul Wilson's group at UTexas, where allocation performance of all kinds is studied), expect a GC and a manual allocator to be of roughly similar overall size and complexity.

  • Microsoft has just recently posted the same version of the .NET Framework SDK that they distributed at the PDC in July. The SDK is now freely available, and you can download the SDK preview [microsoft.com] from MSDN [microsoft.com]. (The download is about 86 MB, however.)

    The SDK preview includes a copy of the C# compiler with Win2k Professional. (Note that the SDK does not include the Visual Studio 7 preview, but it does include "ASP+, the Common Language Runtime, documentation, samples, tools, and command line compilers.")

    Microsoft also has some public newsgroups [microsoft.com] (hosted on "msnews.microsoft.com") for discussions about the .NET frameworks, C#, C++, VB, etc. And DevelopMentor is also hosting a .NET mailing list [develop.com].

    The August 2000 issue of MSDN Magazine [microsoft.com] is also featuring an article about C# [microsoft.com].

  • As I'm not an experienced VM or compiler programmer, I hope you'll all forgive me if this is a stupid question, but here goes: If the C# VM is basically a slightly tweaked version of the MS Java VM internally, then wouldn't it stand to reason that another VM could be adapted to handle C# code? How unique is the design of their Java virtual machine?
  • Such event members can be defined to take any delegate type. Delegates are the C# version of function pointers. Whereas a C function pointer consists of nothing but a callable address, a delegate is an object reference as well as a method reference. Delegates are callable, and when called, operate the stored method upon the stored object reference. This design, which may seem less object-oriented than the Java approach of defining a handler interface and having subscribers subclass the interface and instantiate a subscriber, is considerably more straightforward and makes using events nearly as simple as invoking object methods.

    Events are one example of how C# takes a popular use of pre-existing object-oriented mechanisms and makes it explicit by giving it a name and logic of its own.

    Except that it already existed decades ago in Lisp under the explicit name of "closures", exists also in Python under the name "bound methods", and exists in general in dynamically typed languages under the idea of "protocol". The difference is, in C#, you have type checking, so you have to declare the signature.

    Delegate is rather an example of how "Those who don't use Lisp are doomed to reimplement it." :-)

  • I think this whole "write in languages that are C, but easier" movement that's been going on for decades is a little weird.

    Decades? The first real language to fit this description is Java. Pascal came before C, if that's what you're thinking of. Modula-2 and Oberon were after C, but they descended from Pascal, not C. Ada? It wasn't nearly as low-level as C, and had more modern features (e.g. packages). Objective C? That was a merging of C and OOP that went down a different path than C++.

    Not sure what you're talking about here.
  • And there's the secret we've all been looking for...namely, why MS would go to all this trouble to create a Java-alike. Windows 2000! They need a way to strongarm developers onto the new system, and away from NT or 98. If the development tools are only available for Win2k, then you're going to have to upgrade if you want to stay ahead of the curve.
  • "Is there going to be a sudden demand for C# coders in a few years? if there is, maybe I should start learning it"

    I don't know what your philosophy is, but I wouldn't want to tie my career to one company. Computer Science is not supposed to be learning the language of the day, but the fundamental paradigms and algorithms to allow you to pick up any language.

    Though my personal preference is obviously Linux, in a pinch I could become a Solaris or Irix guy. With my UNIX background I can transition to NT easier than an NT guy could to UNIX. The tech world changes so quickly it is best to be flexible and to keep an open mind.

    The way I feel right now though, I'd rather take a lesser paying job doing UNIX stuff and be happy than make more money doing MS and be miserable. Each to his own though. :)

  • It is useful to note that much of what C# provides actually originated in the context of NeXT, the Objective-C language, and the associated operating systems.

    Inheritance and Interfaces

    Another idea lifted directly off Java, and one which turned out to be very controversial, is its treatment of multiple inheritance. In what seemed like a step backwards, Java did not allow you to define classes that inherit from more than one class. Java did let you define "interfaces", which work like C++ abstract classes, but are semantically clearer: an interface is a functional contract that declares one or more methods.
    Objective-C in the OpenStep/Mac OS X environment has single inheritance from a base class (NSObject), and protocols, which are precise counterparts to Java's interfaces. I have run into situations, however, where multiple inheritance is exactly what is required, and using interfaces meant that I had to re-write the exact same code more than once, as I was implementing a group of specialized collection classes in Java. There were two axes of differentiation: mutability (immutable or mutable) and ordering (partially ordered, ordered, or strictly ordered). There was a lot of code that had to be duplicated that I should have been able to inherit from two abstract superclasses, one for mutability and one for ordering. (*grumble*)

    Garbage Collection and Memory Management

    One new feature that I mentioned already was that of copy-by-value objects. This seemingly small improvement is a potentially huge performance saver! With C++, one is regularly tempted to describe even the simplest constructs as classes, and in so doing make it safer and simpler to use them. For example, a phone directory program might define a phone record as a class, and would maintain one PhoneRecord object per actual record. In Java, each and every one of those objects would be garbage collected! Now, Java uses mark-and-sweep in order to garbage collect. It works like this: the JVM starts with the program's main object, and recursively descends through references to other objects. Every object that is traversed is marked as referenced. When this is done, all of the objects that aren't marked are destroyed. In the phone book program, especially if there are thousands and thousands of phone records, this can drastically increase the time that it takes the JVM to go through the marking phase. In C#, you'd be able to avoid all this by defining PhoneRecord as a struct instead of a class.
    Objective-C provides a semi-automatic reference-counted garbage collection mechanism that is amenable to programmer intervention to increase efficiency, through a construct called an Autorelease Pool. Every object has a retain count, which can be incremented or decremented. The object's retain count starts at one, and when an object's retain count goes down to zero it is garbage collected. Note that this happens the instant that the retain count drops to zero, not during a mark/sweep. However, you may need to pass an object on to another part of your app, but your code does not need/want to retain it. What you do instead is tell the object to auto-release. It is then put into the autorelease pool, and later on during the system's garbage collection each object in the autorelease pool is sent a release message. Some objects that are entered in the autorelease pool still have a retain count (as they are being retained by other objects) and are simply removed from the autorelease pool; others have their retain counts drop to zero and are garbage collected.

    You can fine-tune this mechanism to a high degree by putting your own autorelease pool on the stack ahead of the system's primary autorelease pool. For instance, suppose you know that you will be allocating a whole bunch of objects for use in one part of your program, and after you exit it you will never need them again. Well, you can swap in your own autorelease pool in place of the system's at the start of that section of your code, write normal code, then remove and release your private autorelease pool and put back the system autorelease pool, which releases all of the objects you created in your little section of code. Conversely, if you want an object to stick around, just don't ever release or autorelease it.

    However, from a business standpoint, I find that the automated garbage collection and never having to worry about memory allocation issues is a strong point of Java. It allows me to code more complex applications and avoid the memory debugging issues that invariably bedevil complex Objective-C and C++ programs. I can get a WebObjects application to a customer much more quickly using Java than using Objective-C, with quicker turnaround and more feedback cycles.

    Events, Notifications, and Delegation

    Personally, I found C#'s support for events to be a very exciting new feature! Whereas an object method operates the object in a certain way, object events let the object notify the outside world of particular changes in its state. A Socket class, for instance, might define a ReadPossible event, or a data object might raise a DataChanged event. Other objects may then subscribe to such an event so that they'd be able to do some work when the event is raised. Events may very well be considered to be "reverse-functions", in the sense that rather than operate the object, they allow the object to operate the outside world, and in my programming experience, events are almost as important as methods themselves.
    The OpenStep and Mac OS X operating systems (viewed separately from the Objective-C language, as these features are available from Java as well) have long had notifications and delegates. There is a system-wide notification center, objects can define notifications that they will post in response to certain events, and objects can register to receive particular events or classes of events. This mechanism has been in place for a long time.

    Delegation is a bit more tightly tied to Objective-C, as objects in Obj-C can pass messages (i.e. method calls) on to other objects, and objects can "pose as" other objects. An object can register to be the delegate of another object (in Java, the delegator object needs to make special provision for this), and there are "informal protocols" or "informal interfaces" defined that indicate the possible messages a delegate might receive from its delegator. Again, this is not new, and its assembly into a single OS is not new.

    Primitive Types

    C# classes are also all eventual descendants of the object class, but unlike Java, primitives such as integers, booleans and floating-point types are considered to be regular classes.
    This is one feature that I like very much, and wish that Java had. Objective-C, of course, will always have to support native types such as char's and int's, as it is defined as a superset of C. However, Java had the opportunity to remove this artificial distinction, and has caused lots of cursing from yours truly over the past couple of years.

    Compiling to Native Code

    It's not yet clear whether C# programs would need the equivalent of a Java Virtual Machine or whether they could be compiled directly into standalone executables, which might positively affect C#'s performance and possibly even set it as a viable successor to C++, at the very least on Windows.
    I would point out here that compiling to native code may not result in the fastest execution. Review the HP Dynamo project [arstechnica.com], as written up on Ars Technica, for the reasons why JITC can actually exceed the speed of native code. The whole Transmeta Crusoe architecture is built around this theory of operation, and no one will claim that it's too slow.

    Genericity

    One thing that's sure to be missing from C#, and very sadly at that, is any form of genericity.
    Amen to this. The fact that genericity is missing from Java is a serious gripe of mine, and the fact that it is missing from C# is a serious omission. This business of casting objects coming out of arrays is a pain in the neck, and it is often tough to find out where an object of the wrong type went into an array, even though the cast coming out gets you a ClassCastException. Far better to catch the problem when the object goes in, which often gives you a better idea of where your design is broken. One of these days I am really going to have to start using the stuff coming out of the GJ project [bell-labs.com].


    Conclusions

    Overall, I find that the "new" stuff in C# is really old stuff. Furthermore, this is not the first time that all of this has been pulled together in one place. Almost all of this has been in the NeXTStep/OpenStep/Mac OS X family for a long time, and the implementations there are quite mature. I suspect that the implementations in C# will require several revisions before they reach the levels that programmers can really use.

    Just so everyone knows, I am a Consulting Engineer working for Apple iServices, a part of Apple Computer, specializing in WebObjects development. These opinions are my own, however, and not those of Apple.


    --Paul
  • by Anonymous Coward on Wednesday August 09, 2000 @10:36AM (#866876)
    I wrote a graphical editor in Java. On a 1024x768 display (modest for the times), the performance penalty of Swing made some parts of it barely usable on a dual P3-550 Xeon box with a TNT2.

    This was not sloppy code. I made extensive use of caching, pre-rendering and Java2D to make the most of the platform. Java simply has performance overhead -- the overhead of Swing (sloppy, sloppy -- it's not even well built; consider for instance that affine transforms were hacked on but don't apply hierarchically and thus can't zoom Swing UI elements) is huge, but there is overhead in Java2D (JNI is slow, especially for copying chunks of data back and forth) and some additional overhead in the basic design.

    In Java, it is almost impossible to write cache-friendly code. If you build things in an OO fashion, you cannot force locality, since object refs force you to essentially chase pointers for every object. If you write degenerate code that isn't OO (sort of misses the point) then the array bounds checks hammer you anyway (and no, I have yet to see this eliminated by "smart compilers").

    Java has some inherent problems with performance. These are real, they exist, and they are fundamental to the platform.

    Consider that in Java, you have to have a thread per socket connection. Yes, I'm serious. There is no select, there is no poll. This means that a messaging server in Java can maybe serve 3000 clients before it starts to fall apart, but something in C++? Trivial to serve 20,000. You don't even need to optimize it to get a level of scalability that even optimized Java can't reach.

    Consider the weirdness that Java can spawn a child process but not attach to a process that's already running (easy to do in C [C++, C, C#]). How do you write a watchdog process in Java when you can't kill a process that's hung?

    Java is great. I've used it extensively. But it is seriously warped in some ways.

    RSR
  • Granted, Java has overhead, but it's getting better with every release. 1.3 with the new VM, HotSpot, is incredibly fast. I wish I still had the URL of the benchmarking of it. It beat C++ in speed on a couple of tests (granted, they were recursion tests, which C++ doesn't handle well), but Java is getting much better. C# is going to have to show me something besides the speed factor to get me to switch.
  • How do you figure? C++ templates are a hack to deal with the lack of a base class; other OO languages (Java, Objective-C, C#, Smalltalk...) let you do generic things by operating on the base class, from which all objects inherit.

    Casting from Object is not comparable to C++ templates. Firstly, you lose any compile time type checking (which C++ templates give you). On the contrary, the Java approach is the hack. Don't take my word for it, read interviews with Gosling where he admits as much.

  • Part of the uniqueness of C# is its conception of code reuse - for instance, instead of purchasing a commercial garbage-collector for your C++ code, you get one for free from C#. But where does this garbage collector reside? Is it in a shared library? If so, where and when does it get called? Is it a seperate process fork()ed off from the main process? Does the collector get compiled in to each and every program? Is it part of some system-level component that will be built in to the next Windows, that Linux will have to emulate? Inquiring minds want to know...
  • by Dr Caleb ( 121505 ) on Wednesday August 09, 2000 @09:14AM (#866883) Homepage Journal
    Perhaps it should be reviewed with a tuning fork?

    From all the reviews I've read, "See Sharp" doesn't.

  • by Happy Monkey ( 183927 ) on Wednesday August 09, 2000 @12:58PM (#866886) Homepage
    I shudder when I think of programming with significant whitespace.

    You may not always be safe in C++ [att.com]. (Acrobat format ;)
    ___

  • by NaughtyEddie ( 140998 ) on Wednesday August 09, 2000 @12:59PM (#866887)
    I won't touch C# with a ten-foot barge pole for at least 2 years. That's just common sense. Nothing out of Redmond ever works until at least service pack 4.

    Other than that, you're just regurgitating the typical boring Slashdot opinion set in a highly overdramatic style. To call a language dangerous, a corporation evil and to literally identify a real manager with the PHB from Dilbert is at best inaccurate, and at worst displays a shocking separation between you and reality. The world is not as black and white as you would like it to be.

  • As with most things Microsoft, they are striving for easy development... performance is a secondary goal.

    Concerning the ASP+ stuff, though, there are a ton of things built in to increase performance. For example, current ASP pages are scripted: they are interpreted, and an execution path is stored in memory. When the cache maxes out, the plan for the least-used script is dumped, and if that page is visited again, it needs to be re-interpreted. ASP+, however, creates a "compiled" version and caches it to disk. (The compiled version is the bytecode in the .NET CLR.) For a good article on ASP+ be sure to check out this piece I wrote:
    ASP+: An Introduction and My Views on the Matter
    http://www.4guysfromrolla.com/ webtech/071500-1.shtml [4guysfromrolla.com]

    Happy Programming!

  • It might have something to do with the fact that a lot of us might have to learn this language.
  • Hate to burst your bubble, but:

    • New programming languages are not dangerous
    • Microsoft are not evil
    • Dilbert doesn't exist
    • Real managers are not PHBs
    • And "mindshare" is a buzzword
  • http://cm.bell-labs.com/cm/cs/what/smlnj/

    Many people I know here at CMU found ML tough to wrap their heads around. I think it is a wonderful language, and I plan on using it for as many projects as I can in the future.

    Ben
  • by Ececheira ( 86172 ) on Wednesday August 09, 2000 @10:45AM (#866902)
    From what I've seen in the C# Tech Preview SDK, C# does compile into an intermediate language (IL), but it never actually runs as IL. Instead, the installation process compiles the IL into machine-specific code that is optimized for the specific platform it's running on. Once the code is installed, the byte-code IL can be thrown away.

    What it seems like is a cross-platform distribution but with native compilation upon installation. Sorta a best of both worlds kinda thing...
  • by Spankophile ( 78098 ) on Wednesday August 09, 2000 @11:33AM (#866904) Homepage
    It should have been called:

    C~1
  • Skinner: "We need a name that's witty at first, but that seems less funny each time you hear it."
    Apu: "How about "The Be Sharps"?"
    Skinner: "Perfect."
    Homer: "The Be Sharps."
  • How often do you hit Enter followed by Tab, only to realise that MSVC has already indented correctly for you? Yes! It drives me nuts. That's why I use the VisEmacs plugin [atnetsend.net] for anything more than trivial edits. I keep the MSVC editor for when I'm debugging (which isn't really editing, I suppose). This plugin is great. It's a godsend.
  • There is Java Grande [javagrande.org]. These guys are working on making Java more suitable for number crunching and similar jobs. They've contributed to StrictMath and suggested the strictfp modifier (IIRC)... If they thought number crunching couldn't be done fast enough, they'd never have started the project. It seems like the interpretation / just-in-time compilation part of Java doesn't have too much of an influence on performance with these kinds of applications.
  • by Lechter ( 205925 ) on Wednesday August 09, 2000 @10:47AM (#866912)

    Microsoft says it's supposed to be pronounced C (sharp). But I've almost always heard "#" called the hash mark. Regardless of what the PR folks say I believe that M$'s developers really meant it to be pronounced see-hash. Could this indication of an obsession with pot among Microsoft's developers be an explanation of the buggy history of Windows? I don't know but it does explain things...

  • Heh. With all the work I do with shells and perl when I first read the language name I thought it would be read as "See Comment".
  • I wonder this very same thing every time I start up a Smalltalk image to get some work done. So many of the "innovations" of Java and C# are naught but features of Smalltalk with C's nasty syntax applied. Then you have abominations like C++. I'll never understand the motivation behind that. I usually chalk it up to the fact that the average programmer's brain isn't flexible enough to look at non-BCPL/Algol syntax. Sounds elitist, but I'm at a loss for any other explanation.

    Thanks for the pointer to Cugar - I've not looked at it yet, but hopefully I can use it to escape the hell known as C++ this coming school year... Bloody professors won't let me use Objective-C or Smalltalk, but I'll show them! :)
  • by Ars-Fartsica ( 166957 ) on Wednesday August 09, 2000 @10:49AM (#866918)
    I agree completely that you can get by doing anything with C, perl, and maybe some shell scripting thrown in.

    I've done extensive Java programming since it was v0.9, and C++ programming for about six years, and my opinion is that most of the OO stuff is complete mumbo-jumbo that only serves to confuse the core programmer and others who try using their code.

    One rule that has served me well throughout the years is that one should never use a tool more complicated than the problem demands. Many OO programmers throw this rule out the window, and spend weeks playing with Factory patterns, polymorphism, huge inheritance hierarchies, and all sorts of other junk that creates bloated, useless code.

    At the very least, C++ allows me to limit the amount of OO I introduce into programs. Java seems to be as retarded as Smalltalk when it comes to this.

    Even for "internal" programs that don't require full-out performance, I can bang out a perl solution in half the code it takes a Java programmer to write. I have to wonder how Java programmers keep from going insane. The language and object hierarchy are so verbose that it takes at least twice as many lines of code to get anything done as any other language, and then the speed sucks. Rant off.

  • I see a lot of interesting issues being partially raised here, and by the original article.

    There is indeed overhead involved in a reference, but the hope is that you only have to handle "large" objects by reference. Things like numbers are indeed "atomic" meaning that one use of the number 5, for example, cannot be distinguished usefully from another instance of the number 5. It doesn't make sense to say "change the value of 5 to be 6." Instead, we have a place that can hold numbers, and we change the value in that place to be the number 6 instead of 5. In that sense, a numeric variable is a "reference" to a location in memory, which can contain "values" although we never use those terms in C, as there is no pointer indirection or overhead involved.

    Now, the problem comes in when you talk about larger things, like, for instance, a 3-D rendered scene. Say I have a "scene" which contains a bat and a ball. If I "duplicate" the scene, so I have S1 and S2, and I move the ball in S1, does it move in S2? Hmmm. It depends on what you did/meant when you "duplicated" the scene. Is it the "same" scene, meaning you scratch one and the other bleeds? Or is it a new scene, which just happens to look the same? The same philosophical problem arises when you pass parameters to functions. Do you "duplicate" the argument, or not?

    The point is that a language implementation pushes bits and bytes around. However, a programmer is managing abstractions, and the handling of those abstractions CANNOT be specified as part of a language definition! It depends on the programmer's intent!

    This is another mess that C++ got into when it had to manage assignment and initialization of classes: "copy" constructors for all your classes. But what does it mean to copy? It depends! Not only does it depend on your class, but it also depends on how you want to use it, which can change!

    A side effect of this is that passing arguments around can involve a lot of excess "copying," if you insist on "copying" arguments before you pass them. C++ has to do this, because C did, except when you specifically ask for references. Now, if you have garbage collection, all these excess copies, most of which are soon discarded, need to be cleaned up.

    I guess this is why the author here worries about garbage collection in his phone number instance. In principle, once you have everyone's phone number, you don't need to allocate any more of them, and unless people go away, or change phone numbers, you don't really need to throw them away. Unless you are in C++, so you have to keep "copying" them onto scraps of memory to send to subroutines, and then discarding the scraps as soon as it returns.

    As a side note, mark-and-sweep is usually the worst possible garbage collection algorithm. You have to look at everything, even if most of it isn't garbage anymore. Much better is "generational" garbage collection, which mostly looks at recently allocated stuff. The idea is that if stuff has been held onto for a while already, it probably is still being held onto, while it is very common for things to be allocated, used only a little bit, then discarded. This can be very efficient GC.

    The problem with people's perception of GC is that it happens behind the scenes. "malloc()" and "free()" are right there in your face. In fact, they have so few characters in them, "they must be fast." Of course, as soon as you fragment your arena, malloc can get slower, and slower, and slower....In Windows, where programs typically don't live long, you don't see this. But in the world of serious applications, if you want your program to run for weeks or years, this can be a real problem. Garbage collection on references can then be MORE EFFICIENT than manual collection, not only because it doesn't "forget to free()", it can also, when dealing with references, rearrange memory to be more compact, and therefore localized for cache issues.

    Of course, I don't think C++ or C# garbage collection can actually do this, because when you move objects, you have to go back and change all the pointers that pointed to it, which in Lisp is easy, because the machine can tell a pointer from an integer, but in C-based languages is hard, because pointers and integers are both just piles of bits. But hey, that's what you get for programming in object-oriented portable assembler.
  • You obviously don't use Emacs as your editor.

    Re-indent the whole file
    M-<
    C-space
    M->
    C-M-\

    Emacs was designed for use with Lisp... now that's a language with crazy matching of brackets. Every time I close a brace, paren, etc., the cursor shows me where it matches, or outputs the line in the mini-buffer. If I felt like it I could also have it automatically highlight the whole region between brackets. Really, bracket matching is irrelevant.

    Personally I think that people who code in C/C++/Java/etc. without using any braces for one-line blocks are bad programmers. They're just asking for a bug to be introduced at a later date. Hence I don't, and neither do a lot of the people I've worked with. It's just lazy.

    I kind of like the braces (opening brace on a new line please) as it helps space the code vertically and provides an easy way to search for the end of a block (I'll leave coding for a 25 line screen to Linux kernel people and their crazy coding standards - bunch 'o masochists that they are!) I don't know how you would do that if the block was specified by indentation.
  • C# exists... I've created ASP+ pages using C#

    Go off and download the .NET SDK @ http://msdn.microsoft.com/net/ [microsoft.com]...

  • Well, according to INTERCAL notation, which I'm sure we're all familiar with, it's a "mesh". :)
  • It's no surprise that it sounds like Delphi! It was, after all, architected by the same guy who architected Delphi...
  • I work with C, Java, and Perl all the time. I shy away from Java for most major apps though, because it's a HUGE memory hog and incredibly slow. However, I use it over C++ (when I can't use C easily) because of the elegant design and garbage collection, sacrificing speed.

    This Napster guy makes a point about a speed increase because of the use of copy-by-value, and also talks about how cool it is that an "int" is an object. How is this good? If every object is unique, then copy-by-value is good, but the second you have a duplicate, you start eating up memory. An int may only take up one or two bytes of memory in C; how much is an Integer class going to take up in C#? For something as large as the phone database he mentions, that's a lot of wasted space.

    To me, it looks simply like a fully compiled, less elegant, single-platform version of Java. If Microsoft wants to do something, improve the Java VM; don't waste time on languages that are going to be used regularly by five people in the world. (See ADA, Eiffel, Lisp.)

    -gtaluvit (prnc. GOT-ta-LUV-it)
  • I spotted that one. It's garbage. If anything has a reference to an object, it won't be collected, and if nothing has a reference to an object, it can't be used. Garbage collecting is 100% robust in this aspect.

    Mind you, this is one of Napster's programmers - the company famous for "taking existing filesharing software and making it work with MP3s" - about on a par with Baby's First JavaScript.

  • by baka_boy ( 171146 ) <{lennon} {at} {day-reynolds.com}> on Wednesday August 09, 2000 @01:36PM (#866932) Homepage
    Having had some experience with (shudder) VJ++ development, I would guess that the answer to your question about native code would most likely be: "If you include native code, then your IL will be completely tied to the platform(s) you compile for." Such is the way of "unsafe," low-level code, right? You can't guarantee it'll port, so a given VM can only allow it if it's been compiled for that specific architecture/OS combination. Just look at WFC in J++ -- if you use them, your Java code might as well be VB, because it sure as hell ain't gonna run on anything besides 32-bit Windows systems.

    Of course, Microsoft isn't exactly the only group doing this. As much as I may like the looks of OS X, the development environment is, once again, highly dependent on a number of proprietary, platform-specific libraries and services. Linux and the rest of the UNIX-esque systems benefit from the basic POSIX standard, but I think what we're seeing more and more lately is that that's not quite enough these days. If the UNIXes of the world can't come up with a system that's as brainless to use as Visual Basic, Microsoft will continue to lure developers who can't, don't want to, or don't need to learn the intricacies of OO, and just want to quickly build applications with the benefits of pluggable components.

  • I don't know enough about C# to judge yet, but it's clear that you aren't taking a thinking person's stance here.

    Let's deal with your points 1 by 1.

    1: It's been shown a few times that incremental languages are successful and revolutionary ones aren't. C++ was incremental on C. Eiffel was revolutionary. Which is in greater use? Leave the revolutionary stuff to experimental languages, but be clear that ideas extracted from those languages and implemented in an incremental way are ideas that are successful.

    2: Schizo Gerbil. Jeeze - get a grip. Does C++ keep all semantics the same as C? Nope. Does Java?

    3. Non-open runtime. Really, you don't know, because the language is not ready for use yet. However, Anders has said that the language and all the other MS languages will compile down to run on top of the .Net runtime engine. He has also said that the language will be submitted to ECMA, so you can reasonably expect runtimes to exist for other platforms.

    4. No real standard. They have said they will submit to ECMA.

    Get some balance dude.

  • As programmers, I think our next big challenge is to remove the inherent religious problems with "whitespace" in programs entirely, by presenting a programming system where the programming is truly done at a conceptual level, and not at a textual level. Pick UML, or (relatively) standard flowcharts for procedural decompositions, and work graphically. Initially, you could "render" your program in a variety of back end languages (c, c++, java, perl, tcl, python, etc.) for execution. Eventually, you could generate machine code directly, libraries for "packages" and binaries for executable programs. Then all we can complain about is the expressivity of the graphical programming environment, and not meaningless trivia like syntax, whitespace, and so on.
  • Don't take this as a flame, because I am not trolling. I am just surprised to hear this from someone with 6 years of C++ and Java. Maybe there is a lack of OOA and OOD in your shop to benefit from OO implementations? Thoughts?

    Well, I admit C++ is less of an offender than Java, as it isn't prompting you to use OO whether you want to or not.

    As for a "lack of OOA and OOD" in my office, I'm sorry, but I've heard this one a thousand times. Typically, I find the worst offenders are the ones with the most books and training under their belt - they are even more likely to employ specious OO methods where they aren't needed. The problem is, most of the OO training and literature still isn't frank enough about the success rate of these methods, and what domains they handle adequately. The party line still seems to treat OO as a silver bullet that we could all use to save ourselves if we weren't so stupid.

    Obviously this is anecdotal evidence, but I still haven't heard of one shop who has gone totally OO and is better off for it.

  • by mouseman ( 54425 ) on Wednesday August 09, 2000 @01:40PM (#866937) Homepage
    The designers of C# have admitted the usefulness of genericity, but also confessed that C# is not going to support genericity on first release. More interestingly, they are unhappy with C++'s approach to genericity, which is based entirely on templates. It would be interesting to see what approach C# would take towards the concept, seeing as templates are pretty much synonymous with genericity at the moment.
    As others have pointed out, parametric polymorphism / generic classes are supported elegantly in other languages, such as ML and its variants, without resorting to the hack of templates. But I've only found one post, by an AC, score zero, buried deep in one of the threads (alas, no moderator points) that mentions Generic Java (GJ) [bell-labs.com]. GJ provides an elegant solution to adding generics to Java, and may find its way into future Java specs. It essentially works by having the compiler do type checking based on the generic types, but then translate to standard Java by "deleting" the generic parameters and inserting type casts where appropriate. The compiler guarantees that none of those type casts will ever raise an exception. The result is that where C++ templates cause code bloat by producing a copy of the generic class for every concrete instance, the GJ approach yields a single class, but type errors are still caught at compile time as they should be. The rewriting approach also ensures compatibility with the JVM, since it compiles down to pure Java (with the addition of a little glue code). The catch is that there is no run-time information about the parameters of generic classes, so explicit runtime type checks (instanceof, etc.) can't be used for parameterized types. There was a nice article about GJ [ddj.com] in the Feb 2000 Dr. Dobb's Journal.

    I've used GJ quite a bit, and I'm quite happy with it. Furthermore, there's reason to hope that code written in GJ (the syntax of which is similar to C++ templates) will be compatible with future versions of Java, since Sun is looking into adding genericity to Java, and looking at GJ in particular.

  • Personally, I avoid all of this by using the C indentation mode in Emacs, where "tab" means correctly indent the line, not insert five spaces, so I can quickly check for errors like those by hitting tab on each line.

    (Right now I am teaching myself Visual C++, and the hardest thing about it for me is getting used to MS's editor, not the syntax!)

    LL
  • The parent post was making lots of sense until I read this bit:

    Since it does things like treat "=" as comparison in conditionals and assignment in statements, as well as the whole whitespace formatting thing, it totally spoils you for writing in things like C and Perl.

    I'm sorry, but that is just not true. In Python, '=' is assignment and '==' is comparison, just like in C, C++, etc. What you cannot do is do an assignment and simultaneously treat the rvalue as a boolean in a conditional. In other words, if you mean to do this:

    if a == 6:
        print 'a was 6'

    If you had accidentally typed this:

    if a = 6:
        print 'a was 6'

    Python stops with a 'Syntax Error' exception. If you make the same mistake in C, it would happily overwrite the previous value of a and print 'a was 6'. Lots of fun to debug... not!

  • by spectecjr ( 31235 ) on Wednesday August 09, 2000 @11:57AM (#866946) Homepage
    They had a look "under the hood" of the Virtual Machine only to discover that it looked *strangely* just like MS's Java VM. Apparently they changed the variable/function names but the programmer who was taking a look said the code itself looked the same. They commented that they could actually run Java code on the system without problems, providing it didn't refer to any of the special Java class libraries.

    Uhuh... right...

    The runtime for .net is NOT the same as the Java VM; it's not even the same codebase. It was a completely different team who didn't use any of the JVM code.

    Simon
    Perhaps the more fatal flaw is that you refuse to touch it because of that attribute (significant whitespace). While Python isn't my favorite language (for other reasons), the whitespace-as-syntax has never been a problem for me or the other Python programmers out there. In fact, I've yet to talk to someone who hates the whitespace-as-syntax who's actually done more than go through the tutorial and a couple of trivial scripts. Try it out for a while before you write it off as a design flaw.
  • For someone who clearly knows a bunch about a bunch of different languages, Mr. Arbel doesn't seem to know much about garbage collection in general, and Java/JVM GC in particular.

    The JVM is not required to do mark-sweep GC. The JVM spec ifically [sun.com] leaves the implementation of storage management unspecified.

    This is good because it means that Java can use modern, higher-performance GC strategies like stop-and-copy or generational GC, both of which have been in use in Lisp and Scheme systems since the 1980's. I strongly suspect that C# will have to use mark-sweep or some other non-relocating GC, since you're allowed to go down below it to assembler, exposing pointers that might need relocation.

    Do most JVM implementations really still use mark-sweep GC? Despite James Gosling and Guy Steele both being ur-Lisp hackers?

  • by DragonHawk ( 21256 ) on Wednesday August 09, 2000 @05:34PM (#866954) Homepage Journal
    It is interesting to note that two of the "new" things that make the article author so excited have already been done by Borland, in C++Builder, while retaining C++ compatibility. (Well, okay, they added a couple keywords, but marked them as implementation-dependent using the ANSI standard double-underscore prefix.)

    Personally, I found C# support of events to be a very exciting new feature ... an object reference as well as a method reference.

    C++Builder has been doing this since day one, with what Borland calls a "closure". You use a new keyword, __closure, to declare a pointer which points to a member function of a specific object instance. Not surprisingly, Borland uses this to drive the entire event system in their GUI framework. It rocks.
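    For comparison, here's roughly what the C# side of that looks like, going by the published spec drafts (Button and Logger are my own toy names, not anything from the spec). A delegate instance captures an object reference plus a method reference, just like a __closure:

        // A delegate type fixes the signature its targets must have.
        delegate void ClickHandler();

        class Button {
            public event ClickHandler Click;   // event backed by the delegate type

            public void Press() {
                if (Click != null) Click();    // fire: invokes every bound (object, method) pair
            }
        }

        class Logger {
            public void OnClick() { System.Console.WriteLine("clicked"); }
        }

        // Wiring it up: the delegate stores both the Logger instance and the method.
        //   Button b = new Button();
        //   Logger log = new Logger();
        //   b.Click += new ClickHandler(log.OnClick);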

    Properties are another example, even though they're not as much of a labor-saver as events are.

    Again, Borland has been doing this since day one. The keyword __property can be used to declare object members which appear to be simple variables to "outsiders", but do magic when read or set.
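    Going by the spec drafts, the C# version reads about the same; Thermometer here is a toy example of my own, not anything from Microsoft's docs:

        class Thermometer {
            private double celsius;            // backing field, hidden from callers

            public double Fahrenheit {         // looks like a plain variable to "outsiders"
                get { return celsius * 9.0 / 5.0 + 32.0; }
                set { celsius = (value - 32.0) * 5.0 / 9.0; }
            }
        }

        // Reads and writes look like field access but run the accessors:
        //   t.Fahrenheit = 212.0;    // calls set; stores 100 in celsius
        //   double f = t.Fahrenheit; // calls get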

    Once again, Microsoft fails to innovate, but instead steals from elsewhere.
    HotSpot is *extremely* good... probably a fair distance along the way to optimal, given its ability to do things like deep inlining across several layers when the runtime history indicates that a path is very hot. The reason Java code still runs slower than C++ is partially the HotSpot overhead and partially the fact that the way you write code in Java is often much less CPU-efficient than the way you would write code with similar intent in C++.

    In C/C++, you would parse a file line by line, constantly re-using a single memory buffer for each line that you read (sizing it up if it overflows, of course). In Java, since Strings are immutable (for thread safety), you wind up creating a new String for each line, plus a new String for just about every subelement that you pull out for further processing. This style is mandated by the fact that so many of the Java class library APIs demand real live immutable Strings, and you don't have a choice but to create a bazillion of them in many cases.

    With HotSpot, creating new objects off of the heap can be very nearly as fast as stack-based allocation of auto variables in C/C++, but it's just not going to ever be as fast as intelligently re-using a memory buffer. There are many things in Java programming that force that kind of trade-off. The benefit is that you get a lot of aid and encouragement to make your code thread-safe and that it is actually possible to guarantee that an object's private state can't be trashed by anything external to it, never no way no how. A completely reasonable trade, in my opinion, especially in light of the massive portability support provided by Java, but it's not The Answer To Everything, of course.

    But HotSpot itself is some impressive shit.

  • So is it feasible that I could define an object in C# that fires a ReleaseMe event when it's finished with itself, at which point it gets garbage collected?
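    Something like the following is what I have in mind, going by the spec drafts (ReleaseHandler and Tracked are names I made up). The catch, I assume, is that a destructor only runs after the object is already unreachable, so the event could announce the collection but not postpone it:

        delegate void ReleaseHandler(object sender);

        class Tracked {
            public event ReleaseHandler ReleaseMe;

            // Destructor/finalizer: invoked by the garbage collector some time
            // after the object has become unreachable.
            ~Tracked() {
                if (ReleaseMe != null)
                    ReleaseMe(this);
            }
        }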
    Nice post. It's unfortunate that even with Apple's backing, Objective-C is probably off the radar for good. I can't see it regaining much mindshare with all the hype Java is getting and the vigorous interest in scripting. This isn't the first language tragedy.

    As for JITC, prospective apostles of this technology should try it out with real programs and do extensive benchmarking - indications are that HotSpot is still two to four times slower than C++. Most claims I have seen for JIT code are based on in-memory operations with very little I/O and/or user interaction.

  • And then there's Lisp! I've seen benchmarks where good tail-recursive Lisp code beat FORTRAN at some numerical tasks.

    Okay, obligatory plug over... =)

  • What's hearsay? My comment or the other guy's?

    Which of the two of us worked on the .net team? (Clue: Not him).

    Simon
  • That's right except for a few things:

    the output would be "21" and "10" if it is passed by reference, not "10.5" (integer division)

    C and C++ pass by value by default, and Java passes primitive types (int, double, char) by value but objects by reference (strictly speaking, it passes object references by value)
  • by pb ( 1020 ) on Wednesday August 09, 2000 @09:16AM (#866969)
    There's a C library that does garbage collection already. Actually, I think there are a few of them.

    And it's a shame to not see good template (genericity?) support in C#. Or any language, for that matter.

    I think choosing a good type system is where a lot of languages fall flat, and I'm not a big fan of the huge C++/Java Object/Type/Library approach, although I haven't seen a truly good solution to this problem yet. C, Pascal, Java, Perl, Scheme... They all have different ideas and solutions, and I haven't seen a "Right Way" yet. Although I think Scheme has the right idea with its first class data types, it still all needs some work.
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Events? Attributes? Garbage collection based on copy-by-value?

    The similarities to Visual Basic are eerie. It sounds a lot like they looked at the way people were using VB and incorporated those ideas plus improvements people have been asking for into a package that is more 'programmerly'. Anyone want to place any bets that this is the heir apparent to VB?

  • I think it's an unfortunate misconception that C is the ultimate "power language" because it gives you so much "control".

    I would argue that efficiency and performance are more likely arguments for C.

    (Eiffel, Haskell, ML come to mind)

    Haskell in particular is cool, but I think you do these languages a disservice by mentioning them in the same post as Java - while they all require a new way of thinking, none of them has nearly as much mumbo-jumbo associated with it as Java. In fact, I see the simplicity of Haskell as its key advantage.

    I found the comments made about this language feature in the article somewhat naive. It is well known that C++ templates, being essentially textual substitution, do not guarantee type safety. For examples, see "Object-Oriented Type Systems" by Palsberg and Schwartzbach. In that book, you can also find a viable alternative (and a nice proof that inheritance and genericity are orthogonal mechanisms which commute).

    Matt
  • "For serious number-crunching, using JNI to hook into some optimised native libraries, which can be built with a minimum of platform-dependant code if you don't count Makefiles, all-but-negates any speed loss that going with a purely Java solution would give you."

    I've always been partial to CORBA as a solution to using native components. It doesn't matter where and how your components are implemented. We were doing stuff using the TAO ORB for our C++ servers (and Visibroker in Java). For what we were doing, CORBA is so fast that it wasn't really noticeable if my CORBA servers were at the other end of a dial up connection!
  • that one design flaw (making whitespace significant) has kept me from wanting to touch it. It would be okay for the compiler to generate a warning for incorrectly-indented code, but to generate incorrect code instead is simply inexcusable.

    I think this is a common poor analysis that reads the situation backwards. In reality, code is almost always indented "correctly" according to what the programmer intended; errors arising from incorrect indentation are generally due to the programmer failing to insert the braces in the correct positions, and thus don't exist in a language like Python. So "to generate incorrect code" due to a formatting error is simply an impossibility, unlike in C.

    Whitespace formatting is instantly visible; that's why people indent their code even when it's insignificant. Braces, OTOH, are very hard to keep track of. When the whitespace isn't used by the compiler, you're using one technique to give this information to a human reader (including yourself) and another to give it to the compiler: a sure recipe for errors.

    How often have you seen C bugs due to missed semicolons and braces? Start with this:

    if(x==y)
        doxeqy();

    Then you realize you had to do more for that condition:

    if(x==y)
        doxeqy();
        doyeqx();

    Whoops! It looks okay, but of course it's not.

    Or how about this classic mistake?

    if(x==y)
        if(t==u){
            doxyandtu();
            doxyandtu2();
            doxyandtu3();
        }
    else{
        donotxy(); /*d'oh!*/
    }

    Hmm... looks okay, compiles fine...

    Sure, they're goofy mistakes, people make them all the time! Human minds are terrible at diligence tasks (when they have to remember to do something and nobody is reminding them to do it). Of course you're going to forget to put in braces sometimes! Why not design the language so your first impression is always right?

    In practice nobody ever fouls up a Python script with something like:

    if a=b:
        if c=d: do_aeqb_and_ceqd1()
        do_aeqb_and_ceqd2()

    It's immediately apparent from the indentation that the second function call is in the "if a=b" block, not the "if c=d" block.

    However, I do think that pushing Python as a teaching language is a terrible mistake. Since it does things like treat "=" as comparison in conditionals and assignment in statements, as well as the whole whitespace formatting thing, it totally spoils you for writing in things like C and Perl. Even experienced C programmers often forget things like semicolons and mix up comparison and assignment, people moving from Python to C just have a terrible time.

    ---
    Despite rumors to the contrary, I am not a turnip.
  • by Chiasmus_ ( 171285 ) on Wednesday August 09, 2000 @09:19AM (#866978) Journal
    I think this whole "write in languages that are C, but easier" movement that's been going on for decades is a little weird.

    If I want to use a medium-level language because I want absolute control and optimized speed, I'll use C. I don't want an "almost-medium-level-but-a-little-higher-than-that-level language". If I was looking for ease of use and didn't care about optimizing, I'd go with PERL, or, hell, even Quickbasic.

    Granted, there's a need for these "weird-level" languages, and some people love them - but I think that C++ and Java nicely fill the niche. So, my first thought, which is even more valid, I think, in the face of this review, is "Why does Microsoft feel almost obligated to make an M$ version of *everything*??"

    For GUIs and money managers and anything else aimed at "my mom", Microsoft is guaranteed to reign supreme, because "my mom" doesn't really care about performance issues or security or any of that. But my hunch is that, in light of some of the bugs and general ickiness covered in this review, few people are going to want to switch over to C#. I mean, what would be the advantage?? If you already write C++ and/or Java, why would you want to start writing stuff in C#? I just don't understand.
  • by HeghmoH ( 13204 ) on Wednesday August 09, 2000 @09:19AM (#866979) Homepage Journal
    Why do we need yet another object-oriented C? We already have C++ (fast but crappy) and Objective-C (slightly slower but way better), what does this C# thing add? I saw no compelling features in it that Obj-C doesn't already provide.
  • by KFury ( 19522 ) on Wednesday August 09, 2000 @09:20AM (#866983) Homepage
    (I know I'll get thrashed for this, but my karma can take it)

    It seems to me that creating a new 'standard' language, which nevertheless relies heavily on COM and .NET ties which only exist on Windows, is in part a tactical method to inhibit migration of Windows products to other platforms.

    Let's say that C# is simply a better language than C++ for programming Windows. Let's also suppose the hypothetical case where new Windows functionality comes along in future Windows versions, and that this functionality is more easily taken advantage of using the new C# language. This gives developers the incentive to code new Windows products in C#. Note that C#'s structures are different enough that porting from C# to C++ would not be trivial.

    Now suppose that Linux (or another OS) starts gaining prominence in the next 2-8 years. As with any new OS, its main barrier to entry is lack of software. (The only reason Linux is viable is because of all the UNIX software it inherits.) In this time, Microsoft's pushing of C# has created a new software base for Windows that is relatively locked into place, unable to be ported to other platforms without significant effort.

    Now I'm not saying this is evil. I'm not saying it's a conspiracy. Often languages built for specific environments are superior tools in those environments specifically because they're specialized.

    It's just something to be aware of.

    Kevin Fox
  • I really have to wonder whether the author understands garbage collection. For example:

    there have been a few instances where I was faced with a programming problem where the solution depended on objects *not* being automatically destroyed, as they were supposed to exist separate from the main object hierarchy and would take care of their own destruction when the time was right.

    WTF? Objects will only be collected if they're unreachable, and if they're unreachable, what do you want them hanging around for?

    Also, the whole thing about C#'s "structs" seems a bit dubious to me:

    One new feature that I mentioned already was that of copy-by-value objects. This seemingly small improvement is a potentially huge performance saver

    It would only be a performance saver if these objects are really small, because you'll be copying these objects around all over the place. I also have to wonder how C# deals with the object slicing issue. Object slicing is when you pass a subclass instance to something that takes the base class using pass-by-value, and you implicitly "slice" the object down to the base class. It doesn't happen in Java (or Modula-3, or Objective-C, or any other language that always passes objects by reference). It does happen in C++ if you're not careful, though, and it's really hideous.
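    To make the copying cost concrete, here is a sketch of the semantics as I understand them from the spec (PointV and PointR are my own names): a struct argument is copied wholesale on every call, while a class argument copies only a reference. And since C# structs can't be subclassed, the C++ slicing scenario shouldn't even be expressible.

        struct PointV { public int x; }   // value semantics: the whole thing is copied
        class  PointR { public int x; }   // reference semantics: only the reference is copied

        class Demo {
            static void Bump(PointV p) { p.x += 1; }  // works on a private copy
            static void Bump(PointR p) { p.x += 1; }  // works on the caller's object

            static void Main() {
                PointV v = new PointV();
                PointR r = new PointR();
                Bump(v);   // v.x is still 0: the struct was copied in
                Bump(r);   // r.x is now 1: only a reference was passed
            }
        }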
    I agree, C# does not provide much that is new. The only nice thing I could discover in it was the delegate feature. The syntactic sugar in the rest of the language may also be convenient, but it won't save much time, since it does not fundamentally change the way you model a program.

    Right now there are two things that I would like to see in a new OO language:
    - templates (not the crappy C++ version)
    - aspects (as in AspectJ)

    Both make it significantly easier to model certain problems. Aspects especially are really cool. Unfortunately, C# provides neither, which dooms it as obsolete even before it is finished.

    I don't think C# is a bad thing, I just think it is not a very big step forward (too small to be interesting).
  • True... "corporatism" was invented and employed to pad out Jon Katz articles (which generate web hits to sell ads); totally different.
  • I agree, the JIT does a great job for performance, but the gcj is more along the lines of what I'm looking for in a Win32 environment.

    Providing a self-contained executable, or a set of compiled files (à la DLL and EXE), is much easier than making sure that a customer has a serviceable JRE on his/her box. As it stands, dealing with distribution of Java programs is just as bad, if not worse, than handling VB programs. With VB all I had to worry about was providing the correct VBRUNxxx.dll, but with the (relatively, you must admit) recent switch to Java2 and the more recent addition of HotSpot in JDK 1.3, things got complex.

    Distributing the complete JRE each time, Just In Case, isn't going to cut it. Yes, the support stuff is in JARs, and these can all be conditionally installed - but that's a bit more to worry about than with an old-fashioned EXE. Also, the fact that invoking a Java program involves not only starting the interpreter/JIT but also setting a CLASSPATH makes things icky - at least until Java makes enough inroads that a CLASSPATH can be presumed. Sure, setting up a batch file to do this is fine, but we're just compounding assumptions at that point. A binary executable is a lot more workable when doling out software to non-programmers.


  • by YoJ ( 20860 )
    I think his point still stands, though. If you can declare some objects as not needing to be considered for garbage collection, you can't help but make the garbage collection faster.
    After all, Java does support the final attribute on both methods and classes, and JVMs like HotSpot are perfectly capable of doing extremely aggressive inlining as necessary.

    Since Java is a late-binding language, you do have to have some intelligence in the JVM to optimize method resolution when new classes are loaded and compiled, so you still pay the link penalty for both virtual and non-virtual Java methods at the time a class is loaded. But Sun's JVM on Solaris, Windows, and Linux will rewrite code during execution to translate final method calls into direct calls without any vtable. I believe it may even be able to optimize virtual methods to direct calls in circumstances where the execution history forces the object pointer to be of a specific type, as in the case where one line of code creates an object of a given type and the next line calls a method on it: since HotSpot can tell that at that point the object reference will always be of the specific subtype just created, it can do a direct jump to the method.

    I'm speculating here somewhat... HotSpot may not be quite that smart, but Sun's comments strongly suggest that it does do that sort of path analysis for heavily used code segments.

    Well, I work in one of those shops that has gone totally OO and is better for it. We are a financial institution and our code involves rigorous manipulation of financial data. The system was originally implemented in C and C++. We hired a real OO architect, converted to Java, did team designs, and the result is a much better system. Since then, I have worked on other projects that use OOA/OOD and implement in Java. The result is faster implementation, fewer bugs, and a better translation of the business logic into code.

    I was hoping you would provide some evidence for your bias against OO. What were the problems (not anecdotes) that led to your belief?

    As far as your comment on OO training and relevance to the real world goes, I agree. Much of our success is derived from the combination of well-educated and experienced engineers (I am one of the most junior, with 5+ years as a software engineer), an experienced chief architect (who has implemented dozens of OO systems), and old-fashioned elbow grease! Let me recommend one OOA/OOD book that actually has real-world problems and life cycles: Applying UML and Patterns by Larman. Check it out; you may find it more useful than much of the academic OO drivel on the market.

    Finally, I am not one of those folks who thinks OO is a silver bullet. I model in the relational database arena often enough to know that OO and RDBMSs are not a clean mix. I was as skeptical as you - until we designed and implemented a system with so few problems that it was shocking. These experiences taught me that OO has merit and too many positive ideas to throw out with the bathwater.

    Later.

    --

  • by SuperKendall ( 25149 ) on Wednesday August 09, 2000 @12:11PM (#867006)
    In all talks about C#, IL is really the key - it's the intermediate language that all languages targeting the .NET platform (including C#) get compiled into, before they are JITted on the target machine and run alongside the CLR (a support library).

    Why is IL the key? Consider:

    They are going to submit C# the language as a standard - but I don't think that includes IL. That means that even if you make a C# compiler based on the standard, they could change how IL is structured to shut you down.

    They have stated IL will be compiled to native code in one pass. That can happen before it's deployed, or on the target platform. But by doing that, they lose the possibility of dynamic optimization (one of the things that makes HotSpot fast, and better than just a straight JIT). By allowing the compilation to happen before deployment, you also risk a bad choice of target platform and possibly reduced performance of distributed components.

    It affects all other languages. Using Visual Studio, pretty much all languages will compile into IL. That means the workings of IL affect your code to some degree, regardless of language.

    C# is an interesting language, and I like some of the features - but for all that, would it be impossible to compile C# to Java bytecode? I don't know the answer to that for certain, but the development and capabilities of IL as a platform are really more interesting to watch than whatever language sits on top.

    Another interesting question to consider - C# allows you to have native (unprotected) code blocks. How does that work in relation to IL? Does the code get bundled with the IL, to be compiled when the IL is compiled? Or are the native parts compiled to native code when the other code is compiled to IL, and transported as a mix of IL and native code? The answer has some implications for optimization of native code blocks.
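    For what it's worth, the C# draft defines "unsafe" blocks rather than truly native code, and as far as I can tell unsafe code is still compiled to IL like everything else, just unverifiable IL. A minimal sketch of the syntax (Buffers and Zero are my names; this needs the /unsafe compiler switch):

        class Buffers {
            static unsafe void Zero(byte[] data) {
                fixed (byte* p = data) {   // pin the array so the GC can't move it
                    for (int i = 0; i < data.Length; i++)
                        p[i] = 0;
                }
            }
        }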
  • by wilkinsm ( 13507 ) on Wednesday August 09, 2000 @02:36PM (#867013)
    ...and C# relies heavily on Microsoft's Component Object Model...

    Which is probably how most of this functionality (encapsulation, events, and call-backs) will be implemented. I'm getting the sense this is going to turn out to be something of a quasi-language, which is what Microsoft's Java implementation became in the end (just try to do something meaningful in it without invoking a COM object).

    In the end, C# really does not seem to offer anything meaningful that VB does not (or will not), and for the same reasons it will be no more portable.

  • by HeghmoH ( 13204 ) on Wednesday August 09, 2000 @09:24AM (#867014) Homepage Journal
    With Objective-C, you can have pointers to objects and not have even the faintest clue what kind they are, yet you can still call their methods. I'm sure many proper OO languages support what you're looking for as well.
    With the similarities between C# and Java, I worry less about language lock-in than about reliance on a set of system libraries and object models. In this case, the C# focus on COM objects would prove a greater barrier to porting a lot of code to another platform than the basic language syntax would. Add the new shared .NET libraries to the mix, and suddenly you have mission-critical applications being developed in a completely platform-, vendor-, and version-dependent environment.

    Java may not be the ultra-portable platform it originally claimed to be, but at least companies who develop with it are not signing their eternal soul (and support contracts) away to a single vendor. If you start down the road of .NET, you are now committing not just your desktop applications and documents, but all your business logic and data, to the benevolent guidance of Microsoft.

    The silver lining to all this is the fact that there will be those business analyst-types who will realize the same thing, and say so.

  • by JackDangers ( 66689 ) on Wednesday August 09, 2000 @09:27AM (#867023)
    Some friends at work recently got back from an MS developers' conference where they handed out CDs with the Visual Studio 7.0 beta and a full version of Windows 2000 Professional (since Visual Studio 7.0 will only run on Windows 2000 Pro). I loaded it up, read through the C# book that was included, and was impressed.

    C# is highly typed, so you don't spend hours looking through code trying to find a type mismatch.

    It is early binding instead of late binding, meaning it is quicker! With Java (late binding), a file search and enumeration of 8,000 files on our servers here at work took an hour and a half, while 50,000 files with a C (early binding) app took 4 minutes, so C# takes the best of both. Also, because it is early binding, you don't have to worry about references to non-existent objects when you are using DLLs, for instance. C# automatically loads and reviews the routines contained in a DLL before compile, so typing "myDLL." will bring up a pop-up list of the routines available in that DLL.

    Very cool stuff! It will be interesting to see if it takes over as the new, trendy programming language of 2000/2001, as Java has been for a few years.
  • by __aasmho4525 ( 13306 ) on Wednesday August 09, 2000 @08:06PM (#867031)
    as one who has written about as much software as i've had to maintain other peoples' i can without a doubt say that the software that is well engineered in *either* paradigm is easily maintained (while the converse is clearly true).

    now, where it got interesting was when we actually examined the software engineered by novices. the o.o. paradigm forced more thought to be placed into the structure of the application's design, thus typically resulted in easier to maintain software.

    that software written in c, cobol, basic, pascal, you name it, that was easy to maintain was only that written by the really experienced. the novices in the crowd made our lives very painful. my experience seems to show me that o.o. languages are less lenient toward rush-jobs at design-time.

    just my 0.02

    Peter
  • by ucblockhead ( 63650 ) on Wednesday August 09, 2000 @09:33AM (#867040) Homepage Journal
    I find the concept of "every object is a COM object" worrisome because COM objects are anything but lightweight. Often, as a C++ programmer, I find a problem best attacked with small, lightweight classes, often entirely inline. They help keep the code simple while essentially compiling down to nothing. If every object is a COM object, that ability goes away. That small, lightweight class all of a sudden has all of this overhead to do things that I don't really care about for this particular problem.

    Which gets to a more theoretical problem. The purpose of COM, and models like it, is fairly specific: interoperability between separate running programs, either locally or across a network. But who says I want to share every single object in my program with the outside world? What's the point in having a string class that could potentially be shared between programs if I've got no need to share it between programs?

    It seems to me that people are going to find that to get C# programs to perform acceptably, they are going to have to design with big, heavy, kitchen-sink classes. And that worries me, because that sort of design is, in my opinion, one of the biggest downfalls of most Windows software (especially Microsoft APIs). I'm sick of having to instantiate five classes and write a hundred lines of code just to find out if the damn CD player is in the "playing" state.

    It seems to me that this is a case of having a hammer (COM) and seeing every problem as a nail.

    If I were designing C#, I wouldn't make every object a COM object. Instead, I'd have some kind of "COMmable" attribute that could make some objects COM objects with little fuss. Put the control in the hands of the programmer.
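    Using the attribute mechanism C# already has, the opt-in version could look something like this (ComExposed is a made-up attribute; the semantics behind it are pure speculation on my part):

        using System;

        // Hypothetical marker: only classes tagged with this get the COM plumbing.
        [AttributeUsage(AttributeTargets.Class)]
        class ComExposedAttribute : Attribute { }

        [ComExposed]
        class OrderService { /* opted in: pays the COM overhead, by choice */ }

        class ScratchString { /* plain lightweight object, no COM baggage */ }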
  • by Tom7 ( 102298 ) on Wednesday August 09, 2000 @09:37AM (#867068) Homepage Journal
    ML has an excellent implementation of parametric polymorphism (sometimes thought of as "templates"). You can define a function that counts the elements in a list of anythings:

    fun length nil = 0
      | length (h::t) = 1 + (length t)

    which has type: 'a list -> int
    (meaning the function takes a list of anything, and returns an integer).

    Through the mechanism called "functors", you can specialize a generic structure (say "sets", or "mappings", or "arrays") with some types and operations to create a new type. Signatures let you make these types truly abstract (paired with type safety, a very powerful notion).

    All of this is type safe (with proofs). Most of it is accomplished statically too, so there's little run-time overhead. It is indeed Scheme with "some work".
  • by EAG ( 166855 ) on Wednesday August 09, 2000 @09:40AM (#867077)
    Attributes aren't really the C# version of reflection; reflection lets you look at the normal things you'd expect it to, and it also lets you look for attributes.

    Why should you care?

    Well, attributes are really useful in cases where you want to pass some information about the class somewhere else but you don't want to make it part of the code.

    With attributes, for example, you can specify how a class should be persisted to XML.

    [XmlRoot("Order", Namespace="urn:acme.b2b-schema.v1")]
    public class PurchaseOrder
    {
        [XmlElement("shipTo")]  public Address ShipTo;
        [XmlElement("billTo")]  public Address BillTo;
        [XmlElement("comment")] public string Comment;
        [XmlElement("items")]   public Item[] Items;
        [XmlAttribute("date")]  public DateTime OrderDate;
    }

    At runtime, the XML serializer looks for those attributes on the object it's trying to serialize, and uses them to control how it works.

    You can also use attributes to communicate marshalling information, security information, etc.

    The nice thing about attributes is that it's a common mechanism, and it's extensible, so you don't have to invent some new mechanism to do something similar.

    Or, to look at it another way, attributes are just a general mechanism for getting information into the metadata.
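    To make the "looks for those attributes" step concrete, here is a rough sketch of the consumer side, assuming the PurchaseOrder class above; this is just the general reflection mechanism, not necessarily how the XML serializer is actually written:

        using System;
        using System.Reflection;
        using System.Xml.Serialization;

        class AttributeDump {
            static void Main() {
                Type t = typeof(PurchaseOrder);
                // Walk the public fields and report any [XmlElement] attribute on each.
                foreach (FieldInfo f in t.GetFields()) {
                    object[] attrs = f.GetCustomAttributes(typeof(XmlElementAttribute), false);
                    if (attrs.Length > 0)
                        Console.WriteLine("{0} is serialized as an XML element", f.Name);
                }
            }
        }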
  • by Black Parrot ( 19622 ) on Wednesday August 09, 2000 @12:54PM (#867081)
    > what does this C# thing add? I saw no compelling features in it that Obj-C doesn't already provide.

    It provides a Java-proof firewall around Microsoft's market share.

    --
  • by odysseus_complex ( 79966 ) on Wednesday August 09, 2000 @09:44AM (#867091)
    I would also like to point out a fallacy in the article. The author states that "Java uses mark-and-sweep in order to garbage collect," which is incorrect. The JVM spec does not specify which garbage collection algorithm is to be used, only that unused storage must be reclaimed by some automatic mechanism. So the GC could be mark-and-sweep, copying, generational, or reference-counting. Heck, it could even be an incremental collector, in which case overall application timing becomes a moot point.

    So, in this case, the analysis of time-to-collect is misplaced unless it refers to a specific VM.

    I recommend reading the VM spec. I've read it 5 or 6 times myself.

  • by MassacrE ( 763 ) on Wednesday August 09, 2000 @04:31PM (#867100)
    OOD is incredibly useful, but there are several types of projects which it fails for:

    - projects which are one-shot, or which nobody has any interest in maintaining. For these, why do design at all? If it works well enough for you to use, and you don't have to worry about adding features or fixing bugs other people run into, just finish the damn thing and go home. Perl is great for this.

    - projects in which the customer has no clue what they want. I've been using design principles for a while; my current customer has no earthly clue what they want, but they definitely hope that I can completely change everything around at the last minute when I show them a working program based on my own ideas of how it should be designed. They literally won't pay me to do design specs. The code is rapidly approaching 100k lines, and I have no idea how the massive one-ton block of spaghetti will be maintained in the future. There are parts of an old program with 25-page-long routines pasted in, and I have been told that if I edit those routines I will be fired on the spot.

    - single-developer projects (or even two/three-developer projects) don't really need OOD and documentation as much as team or multiple-team projects. It can still hit you hard right around 40-60k lines (depending on just how good you are), when you realize that you don't remember how the whole thing works anymore. After about 80k lines, you will spend more time communicating how things work and bringing new developers up to speed than you will coding - basically, you aren't done with the program and already you are maintaining it.

    If you are writing small, simple pieces of software, OOD is a joke. The reason isn't that OOD doesn't scale down; it's that the problem is so simple you designed it in your head. The end product is so simple that you can look at it for five minutes and understand what is going on. But as soon as you can't think through all the issues someone will face in the project just by looking at the problem, as soon as you can't be sure what steps to take to get from A to B, you need OOD.
  • by Dawn Keyhotie ( 3145 ) on Wednesday August 09, 2000 @09:46AM (#867101)
    Well, I'm glad I'm not the only person getting tired of having the (preferred) pronunciation given with every single mention of the language's name: C# (pronounced "see sharp"). Ugh. How about Java (pronounced Java), C (pronounced see), Eiffel (eye full), Pascal (pascal), perl (perl)? No explanation needed there!

    Anytime you come up with a name where you have to explain the pronunciation every time you use it, you know it's a real lamer. It's a dead giveaway that you worked too hard, stayed up too late, and got too cute.

    Of course, there _are_ lots of ways to say C#. I always think 'hash' every time I see '#', and if you use the hard sound for the letter C, you get K-hash, or simply Cash. Which I think fits, considering the orifice from which it issued.

    Cheers!

  • by vyesue ( 76216 ) on Wednesday August 09, 2000 @09:46AM (#867104)
    > C# is highly typed, so you don't spend hours
    > looking through code trying to find a type
    > mismatch.

    Client.java:43: Incompatible type for declaration. Can't convert Context to Community.
    Community context = (Context)contextE.nextElement();

    yeah, that took a couple hours to track down. phew!
  • by Tom7 ( 102298 ) on Wednesday August 09, 2000 @09:47AM (#867114) Homepage Journal

    I think it's an unfortunate misconception that C is the ultimate "power language" because it gives you so much "control". There are lots of advanced languages around (Eiffel, Haskell, ML come to mind) which are more powerful than C precisely because of their restrictions. (You can reason about your programs more precisely since you know certain behavior is impossible). Java has some of this (safety, at least) and I think that makes it better for most applications than C++.

    I'll agree with you here, though: aside from a few cosmetic improvements, C# is not really any better than Java.
  • by jetson123 ( 13128 ) on Wednesday August 09, 2000 @05:04PM (#867125)
    I see nothing in C# that would make it perform any better than Java. The introduction of objects with value semantics perhaps comes closest, but similar efforts are underway for Java (and can be implemented without any significant changes to the Java language and none to the Java byte codes).

    Java has an enormous number of man-years poured into its design, library, bug fixes, and various implementations. Issues like safety, sandboxing, security, and reflection aren't even addressed by C#. A complete set of cross-platform libraries is also not a goal of C#, while Java actually delivers, and delivers pretty well. And people are working hard at adding genericity and features for high performance computing (JavaGrande) to Java.

    If Windows programmers switch in droves from C++ to C#, that would be great: as far as it's defined, C# is almost indistinguishable from Java, and it would elevate the quality of software on Windows platforms. But, at this point, C# looks like a dud to me: it's late, it's non-standard, it has no user community, and it doesn't even promise any compelling advantages over Java.

  • by Acy James Stapp ( 1005 ) on Wednesday August 09, 2000 @09:49AM (#867138)
    I think you are a bit confused about early/late binding. Early binding is the process of determining the address of the function to call (given its name) at compile, link, or load time. This happens once, so it is fast. Late binding is the process of determining the address of the function to call each time it is called. In C++, late binding is used for virtual functions, and early binding for non-virtual functions.
    There are optimizations that make late binding more efficient (such as a vtable). Java implementations should be able to determine (via the final attribute) that a function can be early-bound instead of late-bound. There are also C++ compilers that can eliminate virtual function calls by doing link-time program analysis.
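    In C# terms (on topic here), the split is explicit in the source, since methods are non-virtual unless declared otherwise; a quick sketch of my own:

        class Animal {
            public void Eat() { }             // non-virtual: early-bound, direct call
            public virtual void Speak() { }   // virtual: late-bound through a per-call lookup
        }

        class Dog : Animal {
            public override void Speak() { }  // overrides participate in late binding
        }

        // Given "Animal a = new Dog();":
        //   a.Eat()   resolves at compile time to Animal.Eat
        //   a.Speak() resolves at run time to Dog.Speak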
  • by lordpixel ( 22352 ) on Wednesday August 09, 2000 @10:04AM (#867149) Homepage
    C++ uses a mixture of early and late binding. So does Java. I'd be stunned if the same wasn't true of C#. Late binding is the only way to do some things. Perhaps C# does more early binding than Java.

    Then there's the question of Java's HotSpot, which can remove the late-binding overhead in Java at runtime once it has optimised a piece of code. It can even de-optimise, switch back to late binding, and then re-optimise if you start doing dynamic class loading. This is a big and complex subject, and I'm not sure I could explain it here even if I had the time...

    I submit that the most likely reason for the performance of your Java program is that it wasn't as well written as the C++ one. Binding is a tiny, tiny part of the difference between the two languages. There are many other factors which are more likely to account for the difference.

    As for the DLL thing - that sounds great - except when you come to deploy your application, the user doesn't have the same version of myDLL that you did on your super dev box. Hence the user gets a reference to a non-existent object. Early binding really has nothing to do with this. It sounds like it's being used to implement a nice feature in the IDE, not a solution to the age-old library versioning problem that people are discussing in the "Let's Make Unix Not Suck" thread
    Lord Pixel - The cat who walks through walls
  • by J.Random Hacker ( 51634 ) on Wednesday August 09, 2000 @10:04AM (#867151)
    As to why compilers abandoned generating C, I can answer that readily. Two things come immediately to mind.

    (1) The ability to do source level debugging is nearly completely lost in translation from C++ to C.

    (2) It is much easier to optimize code when you are going from a symbolic representation to RTL or some other near-machine-level representation than when you are going from symbolic code to some other symbolic code with uncertain semantics. While the translation is *possible*, the results are not elegant, efficient, or readable.
  • by _prime ( 181525 ) on Wednesday August 09, 2000 @09:53AM (#867172)

    I recently went to a brief presentation on C#, done by some Comp Sci folks just back from the MS developer conference.

    A few points I recall:

    • It does require a virtual machine. It forms a layer of abstraction called the "Common Language Runtime." The name of the VM itself is "NGWS". Performance questions led to answers along the lines of it being comparable to Java; there were indications that MS engineers believed it would eventually run faster than a compiled language due to the virtual machine's ability to optimize code execution for individual processors and systems.
    • They had a look "under the hood" of the Virtual Machine only to discover that it looked *strangely* just like MS's Java VM. Apparently they changed the variable/function names but the programmer who was taking a look said the code itself looked the same. They commented that they could actually run Java code on the system without problems, providing it didn't refer to any of the special Java class libraries.
    • Visual Studio 7 (?) would be known as Visual Studio .NET and would feature C# and the VM. They remarked that the beta release they received was quite stable.
    • The VM would run on everything from Win98 up (not sure about 95).
    • They went into some detail on MS's new strategy. Basically they are hoping to capture the Middle Tier market with C# services running on the VM (NGWS), accessible remotely via SOAP. On top would be ASP+ (internet) and Win Forms (enterprise).

    Someone asked why we need another language, especially one so close to Java. The presenter(s) explained that MS basically wanted to offer a VM-based, Java-like language, but was unable to add its own extensions to Java to fit its new strategy (remember the lawsuit from Sun?). They remarked that perhaps Sun made a mistake in its desire to keep MS from making non-standard alterations to the MS implementation of the Java VM. MS, as usual, just went ahead and created its own new standard. Now we have another language to pull developers away from Java.

  • by Edward Kmett ( 123105 ) on Wednesday August 09, 2000 @10:09AM (#867176) Homepage
    I dislike the syntax used for C#'s structs. Since it is identical to the syntax used to access a class, you cannot tell whether you are passing by value or by reference into a function without digging up the source definition of the type.

    // my_int could be declared either way; the caller can't tell which:
    //   struct my_int { public int v; }   // value type
    //   class  my_int { public int v; }   // reference type
    my_int bar(my_int x) {
        x.v += 1;       // mutates a copy if my_int is a struct, the caller's object if a class
        return x;
    }
    my_int foo() {
        my_int y = new my_int();
        y.v = 2;
        bar(y);         // if my_int is a class, y.v is now 3
        return bar(y);  // result has v == 3 for a struct, v == 4 for a class
    }
    foo returns different results depending on whether my_int is a struct or a class. You cannot tell whether modifying the object will modify the original without looking up the original type definition, which can be buried pretty much anywhere in C#, because it has abandoned the rigid formalism Java used of one public class per file, with the file named after the class.

    Mind you, Microsoft will probably sell a nice Visual Studio plugin which looks up the source definition, shows you whether it's a class or a struct, and lets you hyperlink to the definition (witness MSVC and VB), but I'm examining the language, not Microsoft's tools.

    I disagree with the author of this article about the presence of attributes. Attributes (as ugly as they are) create an avenue to extend the meta-information provided by the language. Since attributes reside at the class rather than the instance level, the glut of their presence is not intolerable. Their presence is necessary to fully specify COM parameters, and they can act as a reflection tie to documentation, provide editor bindings to source code, etc., in one place, rather than Java's comparatively hackish javadoc approach of differentiating /* from /**. It's new, it's different, but I can accept it and already see uses for it.

    However, I do not see the need for events to be a language-level construct. With the introduction of generics/templates, they could be implemented as a generator/listener template with a common superclass and subscribers tracked in a static list per event template used. No extra nomenclature need be added to the language to describe what is, after all, just another pattern. In the absence of generics I suppose they did the best they could, but it's another case of pandering to their present programming paradigm (pardon the alliteration).

    I do miss the presence of Java's inner classes and either C++'s templates or some other form of constrained generics (can anyone say Eiffel?).

    In fact, the lack of any form of generics will probably keep me from using the language, as I am a pattern junkie, and templates/generics are key to avoiding a lot of cut&paste code when using similar patterns heavily.

  • by efuseekay ( 138418 ) on Wednesday August 09, 2000 @09:55AM (#867177)
    Despite the success of C#, programmers began to complain that, though C# provides much of the functionality of Java with the flexibility of C++, there exists a middle ground between C++ and C#.

    C#++ is designed to fill that middle ground.

    CaptChalupa, a MicroSlash programmer, said, "I love C++, and always programmed in C++. But the lure of the C#, which my colleagues have raved all over, is tugging me. The reason I don't want to abandon C++ is because I still like to keep the flexibility of collecting my own garbage. C#++ is the answer that may just finally pull me away from C++."

    Meanwhile, in another development, Microsoft Applications Inc. has announced the development of a new language called "C##".....

  • by askheaves ( 207302 ) on Wednesday August 09, 2000 @10:09AM (#867180)
    Since the first day I heard about this language, I have been excited about it. The more that I read about it, the more I can't wait to see that pretty little brown MSDN box come in the mail with my copy of Vis Studio 7.

    As a Microsoft-whore, I'm sold on the ability to develop with the new tools of VS7 (which, BTW, features C#, VC++, Managed VC++, and VB all running in the same IDE with concurrent, multilanguage debugging... baby!). I am working on a project that requires two WinCE portions and a data management system. The things I've read have made me make the conscious decision to do no code development on the data management portion until VS7 is in my hand and on my machine.

    This month's Visual C++ Developers Journal has a cover-to-cover exposé on VS7. This is a link to the online article.

    http://www.devx.com/upload/free/features/vcdj/2000/08aug00/bn0008/bn0008-1.asp



    "Blue Elf has destroyed the food!"
  • by cybercuzco ( 100904 ) on Wednesday August 09, 2000 @09:56AM (#867183) Homepage Journal
    and you'd have to be smoking hash to code in it.

  • by Anonymous Coward on Wednesday August 09, 2000 @10:13AM (#867206)
    speaking of compilers, wouldn't it be funny if Sun released a compiler for C# that didn't follow the language specifications exactly as Microsoft has detailed them?

    I wonder who would win that court battle...
  • by stevens ( 84346 ) on Wednesday August 09, 2000 @10:03AM (#867207) Homepage
    Quoth the poster:
    ...C's hideous syntax...
    ...and...
    I wrote a C/C++ preprocessor that lets you use Python-style whitespace-significant code, called Cugar to escape the ugliness of C.

    I shudder when I think of programming with significant whitespace. It's what has kept me from picking up Python in earnest--at least until I write a conversion program that turns braces into the whitespace that Python likes (in Perl :-).

    To think that C would be made *better* by the addition of significant whitespace gives me the chills.

    Steve
  • by rafial ( 4671 ) on Wednesday August 09, 2000 @11:07AM (#867220) Homepage

    It would be okay for the compiler to generate a warning for incorrectly-indented code, but to generate incorrect code instead is simply inexcusable.

    Depends on your definition of correctness. ;) If you wrote:

    if( myCondition) {
        doThingOne();
    doThingTwo();
    }

    On a quick visual inspection of the code, I'd assume that doThingTwo() was outside of the if clause, and if you wrote:

    if( myCondition)
        doThingOne();
        doThingTwo();

    I'd definitely assume doThingTwo() was inside the if until I looked closer and noticed the braces were missing. In this case, the "correct" code would be very surprising.

    In Python, you don't have these sorts of surprises: your block structure is immediately obvious from indentation, and if it looks wrong, it is wrong. I'd call that correct code generation...

    I'll admit I was a little put off by this feature of Python at first, but as soon as I started working with it for a while, it just seemed natural. Now, in my Java programming, I always get annoyed when I have to spend time balancing my braces!

    Also, I used to do a lot of work in Perl with a guy who never bothered to indent his code consistently (after all, it's the braces that define the block structure), and at least once a week he would call me over to look at some bug that became obvious as soon as I reformatted the code. And applying the pragmatic programming principle of DRY (Don't Repeat Yourself): what is the point of having the same semantic information encoded in the formatting (where it is visible to the programmer) and in the brace structure (where it is visible to the compiler)? If you've got the same information in two places, eventually they'll get out of sync, and you'll lose...

  • by Snocone ( 158524 ) on Wednesday August 09, 2000 @10:24AM (#867233) Homepage
    That's cool; is there some sort of method naming convention?

    That problem is addressed through the use of protocols. Some are formal, like the reference-counting protocol for instance (implementing one is known as "adopting" that protocol); informal protocols can be defined at will. Note that this also gives you basically all the design capabilities of C++'s multiple inheritance with none of the associated problems.

    "Protocols free method declarations from dependency on the class hierarchy, so they can be used in ways that classes and categories cannot. Protocols list methods that are (or may be) implemented somewhere, but the identity of the class that implements them is not of interest. What is of interest is whether or not a particular class conforms to the protocol--whether it has implementations of the methods the protocol declares. Thus objects can be grouped into types not just on the basis of similarities due to the fact that they inherit from the same class, but also on the basis of their similarity in conforming to the same protocol. Classes in unrelated branches of the inheritance hierarchy might be typed alike because they conform to the same protocol."

    -- Object-Oriented Programming and the Objective-C Language [apple.com], p.99.

    What I like about Scheme is that you can query the datatype to see what it is,

    In Obj-C you can ask an object what it is, whether it is a kind of some other thing, whether it responds to a given message (note that through the use of categories, this capability may be added at runtime), whether it conforms to a given protocol, yadayadayada.

    On the other side of the fence, Java has a type for everything, and is correspondingly complex, too.

    Obj-C is, well, C. You add just as much or little complexity as you wish.
