The Object Oriented Hype

bedel231 sent us a fairly lengthy and detailed article that discusses the hype surrounding Object Oriented Programming and argues that a lot of things taken for granted about it are mythical. Personally, I've never been a huge fan of OOP, so I tend to agree with a lot of these points. Regardless of your stance on the issue, it's worth a read.
  • No kidding... it's been my experience that code is almost ALWAYS rewritten to jibe better with the specific program environment.

    The problem is that people tend to only code methods which are needed for their particular use of the object in question. And why shouldn't they? It makes for leaner code. It also makes for horrible reusability, because later when you try to use the same object for a different application, you generally have to get under the hood and rip away at the guts to refashion it for YOUR use. This of course completely sidesteps the whole point of encapsulation and data hiding in the first place.

  • BlackStar, very impressive post.

    Of course, you only gave two solid examples, admittedly from two of the best known and most respected men in the field, but still only two.

    Even with a third, I'm sure Bryce wouldn't back down.

  • one common misconception is that one can not do object oriented design in C, or any language that isn't approved by the OOP zealots...

    one can create objects in C by...


    Yes. But by doing that you're sacrificing the compiler support.

    And IMHO it's the wrong parts of OOP that get the hype. Encapsulation and strong type checking rate higher for me than code re-use or some aspects of inheritance (though polymorphism is right up there).

    One big thing about languages like C++ is that they give you a way to express your intent regarding encapsulation and data hiding to the compiler and related tools, which then check them for you. (I'd say "enforce", but C++ also has ways to express your intent to deliberately violate the normal boundaries.)

    It's really instructive to look at the "things to check for in a walkthrough" section of Miller's book on software quality. There are four pages of essentially one-liners describing the easy errors to make in languages like Fortran or Cobol. And with C++ they're virtually all impossible to generate without producing, at a minimum, a compiler warning.
  • I think, far and away, the best part of this article is the part where he takes vague numbers from studies of large (2+ years before initial shipment) development efforts and tries to apply them to products that can go through 10 whole-number versions and get abandoned by the roadside in 2 years. The slowdown he talks about at the beginning of an OO dev cycle is partially proportional to the length of the dev cycle overall - NOBODY spends 3 years developing objects for a 3-month project, no matter how fanatical their devotion to OO.
  • Sounds like you've done mostly encapsulation, maybe a bit of inheritance is natural too. Probably not polymorphism though.

    ...As if the world owes the concept of interface to OOP.

    The concept of a wall socket is very old. It's so old, it's even present in UNIX. Look in your /dev/, and the interface that your devices are programmed to. The author of the web page is right; OOP is claiming a lot of inventions that are not its own...

    It's too bad. I wish more computer scientists would look at computer science history, rather than the "Learn SQL, C++, and make a bunch of money" book shelf.

  • As someone who has done the exact same kind of programming that the author of the article talks about (small to medium software packages for business, mostly in-house, custom stuff to get the job done), I can sympathize.

    However, I don't think his real issue is with OOP, but with general programming practice in that sector. None of the developers with whom I worked took the time to actually design anything. They jumped in with their first idea and hammered at it until it "worked." They didn't care if there was a "better" design or a better way to do something, they just wanted it to work and for the customer (i.e. some other division of the company) to be happy with the software. If it came back for bug fixes later, oh well, that's what keeps us employed.

    The real problem is not with OOP itself, but with the kinds of OOP tools (often linked to some RAD package) that are heavily used in this area and with the fact that so few of the programmers have the time to actually do design or code review. The managers were constantly telling us to review each other's code and to design everything, but no one ever did, and what passed for a group code review was a joke. No one actually looked at code. The lead programmer, often the only programmer, for the project stood up and gave a "best case" (i.e. outright lie) description of how the code works. Later, when some poor schmuck (i.e. you) has to maintain that very same code (which you were not involved in designing or coding in the first place) you find out just what a mess the code really is. The software more often than not did not even begin to approach the design discussed at the "code review."

    Nah, the problem isn't with OOP, 'cause I have personally seen massive improvements in my coding since I've started using an OOP style with thorough documentation and formal notation. The problem is that in the very sector the author is complaining about, the vast majority of programmers are mediocre, and the managers (who don't do the work, but pick the tools and "methodology") are even worse. The problem is with buzzword programming where you pick tools because they are compliant with all the latest buzzwords, not because they are right for the job at hand, nor because your programmers know how to use them, which the majority don't.

    Some folks need to go back to school!
  • Oops - I was wrong. I forgot about the one part of the article that's more insightful than any comment about programming ever: "In addition, the OOP community is strongly divided on whether the "component" model or the "inheritance" model is the way to go. Microsoft seams to favor the component model. The jury is still out on this one. Without getting into the details of this battle, let's just say OOP still has some growing up to do." Holy fuck! There might be more than one right way to do things! ABANDON FUCKING SHIP!
  • by jon_c ( 100593 ) on Tuesday January 09, 2001 @08:25AM (#520232) Homepage
    one common misconception is that one can not do object oriented design in C, or any language that isn't approved by the OOP zealots. this is just not true; while it may be more natural to write a good object oriented design in C++, Java or Smalltalk, it can also be done in C or BASIC.

    one can create objects in C by creating a structure, then passing that structure to every method that operates on that structure. a common use could be something like this.

    struct window_t win;
    window_init(&win);
    window_draw(&win);
    window_destroy(&win);

    it is also possible to perform polymorphism and inheritance with function pointers and other techniques.
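
    For instance, here is a minimal sketch of that function-pointer technique (hypothetical shape_t type, not from the original post; compiles as C or C++):

    /* Polymorphism in C: each object carries a pointer to its own draw routine. */
    #include <stdio.h>

    struct shape_t {
        void (*draw)(struct shape_t *self); /* the "virtual method" slot */
    };

    void draw_circle(struct shape_t *self) { printf("circle\n"); }
    void draw_square(struct shape_t *self) { printf("square\n"); }

    int main(void) {
        struct shape_t shapes[2] = { { draw_circle }, { draw_square } };
        for (int i = 0; i < 2; i++)
            shapes[i].draw(&shapes[i]); /* dispatch depends on the object */
        return 0;
    }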

    -Jon
  • > Pray tell, how does run-time instantiation have anything to do with either encapsulation or abstraction?

    Simple. You can do abstraction in anything, even turing machines. Encapsulation is done in objects, otherwise you don't have encapsulation so much as namespaces. Not much encapsulation going on if you only have one capsule, neh?

    --

  • I've always liked that saying (in various wordings):
    "If you give someone a hammer, problems will all start to look like nails to him."

    To really drive the point home, I like to add:
    "...and what would he do if given a screw?"

    There's also a response that I came up with once when a friend and I were talking about the coolness and power of some particularly cool and powerful Lisp-related thing (lest my previous post and/or my sig make me seem like just an old-fashioned C hacker):
    "A sufficiently powerful hammer can turn any problem into a nail."

    David Gould
  • I gave up and decided to write in Python. It's still kinda complicated, but at least that's just because the code is doing something complicated.

  • Think about it a minute. What does a C++ compiler do? It translates the (high-level) C++ code to (low-level) assembly code.

    Actually, it's worth noting that the original C++ implementation was a precompiler - it translated C++ to C. (I believe that this would be very difficult or impossible to do with today's C++.)

    Can you do object-oriented stuff in C? Yes. It's often hideous, though. Motif was one such attempt. The horror...the horror....

    The key attributes to a well-structured program (OO or not) are encapsulation and abstraction.

    Can you get these in C? Yes! File scoping is the most underappreciated feature of C; it's often neglected because of poor revision control systems that end up encouraging a one function, one file paradigm. When you use it properly, though, you can put code and data members in a package with well-defined interfaces and the option of private data members. It rocks.
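
    As a rough sketch of that style (hypothetical counter module; names invented for illustration):

    /* counter.c -- file scoping as encapsulation: "count" is static, so no
       other file can touch it except through the functions below. */
    static int count = 0;                        /* "private" data member */

    void counter_increment(void) { count++; }    /* public interface, */
    int counter_value(void) { return count; }    /* declared in counter.h */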

    Can you screw things up by inappropriate use of OO strategies? Sure. Object-obfuscated designs with spaghetti inheritance are common, and are usually caused by becoming infatuated with inheritance and polymorphism at the expense of encapsulation and abstraction.

    Tom Swiss | the infamous tms | http://www.infamous.net/

  • Operator overloading is a C++ feature, not an OO feature.

    But it's not the global shift operator that's been overloaded, it's the shift operator associated with cout. Although it's not an oo feature as such, it is dependent on it. I.e. you're effectively doing cout.print("hello world").print(endl);
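
    A minimal sketch of that mechanism (hypothetical Logger class, not the real iostream implementation):

    #include <cstdio>

    // Returning *this from operator<< is what lets calls chain,
    // exactly the way cout << "hello" << "world" does.
    struct Logger {
        Logger& operator<<(const char* s) { std::fputs(s, stdout); return *this; }
    };

    int main() {
        Logger log;
        log << "hello " << "world\n"; // i.e. log.print("hello ").print("world\n")
    }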

    Rich

  • For example, I claim that the following program is legal C++ code:

    cout << ...

    However, I wouldn't dream of claiming that the program is "object-oriented" simply because C++ is perceived to be an "object-oriented" language.

    You know what? cout is an object oriented function! You can send it strings, characters, integers, and floating point variables without having to indicate to the compiler what you are sending, as you do when using printf() in C. Try a better example. In fact, since C++ is a superset of C, I would submit that THIS is a truly non-OOPed C++ program:

    #include <stdio.h>

    int main() {
        printf("%s\n", "hello world!");
    }

    Now, THAT would compile with g++ and doesn't use anything OOP.


  • I did, just maybe not carefully enough; I guess you did say that, after all. You gave the classic "Draw() method for abstract class Shape with concrete subclasses Circle and Square" example, and claimed at the top that it was something that "With the structure+procedure approach, [...] would have to be [a switch statement]" Following that, the last sentence seemed to be claiming polymorphism as a virtue of OOP over C. Rereading it, though, I see that it sounds more like you are acknowledging that polymorphism can be done without explicit language support, which was also my point. If that's right, then sorry.

    Mainly, I was looking for a good context in which to make the point of my last two paragraphs, namely that OOP is a bigger concept than a feature set of any language. In fact, I still have trouble with your initial argument that:

    According to your definition there's no difference between procedural programming with good practices such as data encapsulation, and OOP. But there IS a difference... an object is self-contained.

    We've already covered methods (explicit and otherwise), including polymorphism, and you specifically mention "good practices such as data encapsulation", so what else does "self-contained" mean? Will it still be anything that a good C programmer doesn't (or couldn't) already do anyway?

    What I'm trying to say is that programming in an OO frame of mind does not necessarily require any particular language, language construct, or even programming technique (see my claim that even the version with the switch statement is in a sense OO).

    David Gould
  • There is some truth in this, in the general case. When something is easy to use, you don't need to be a professional to use it. And when you're not a professional you will in general make a less-than-professional job of it, even if it looks OK to you --- you just don't have the background to judge good from bad.
  • On the other hand, Perl powers Slashdot, so I guess this is the place for the procedural approach to have its message heard. In fact, Slashdot totally vindicates the article: its non-OOP approach is fast, effective, efficient, and easy-to-understand. Highly scalable and expandable as well. I especially like Slash.pm (aka THE BEAST) and the my_conf{shit} variable. I'm sure the non-OOP approach will really take off once everyone switches over from C++/Java to Perl.

    I wouldn't put down those who write in Perl that much as being anti-OOP. There are many, many of us who write and like OOP in Perl (it's especially useful to read Damian Conway's "Object Oriented Perl" to see the beauty of Perl's OOP.)

    OOP in Perl, actually, is much more fun than other languages because of its flexibility. For instance, Class::MethodMaker is capable of creating accessor methods for various types of data members of objects at compile-time; this is a beautiful thing. Perl's flexibility has a whole lot of other things that allow fantastic OOP, and "Object Oriented Perl" expounds on much of this.

  • You're soaking in it, and probably have been for years!

    The chief objective of computer programming is to create useful things. Elegance is tremendously valuable but secondary. NO OTHER interactive message board has come close to Slashdot in terms of usability, readability and even whiz-bang features.

  • How small is a small business app?

    Didn't the author say that this is a question that needs study?

    We have both enterprise-scale and small-business scale versions, but neither would survive long as a procedural app.

    And you're still a software house building apps for other businesses. The software you produce is your product, so you expect to extend and improve upon it. You don't expect to throw it away at the end of the quarter/project/fiscal year. It wouldn't last long as a procedural app, but the author's point was that most of the software he has been commissioned to produce doesn't last long...period.

    Moreover, the enterprise "niche" is the greater part of the software market.

    Your point being...?

    There's no reason to object-orient the shitty little linux apps that you use, since they are usually 1-2 wannabe code monkeys writing bad C++.

    OK. A personal attack. So if someone disagrees with you then their applications are shitty and they are code monkeys. Or is it that when someone buys the hype and uses the it-applies-to-everything sledge hammer to build a picture frame and ends up with a mess, then they are wannabe carpenters. Oh, and by the way, Chevy's suck.

    Any multi-employee project requires the kind of abstraction that only OOP provides.

    So you're saying that there are no large projects except those using OOP techniques?

    I read as much of the article as I could stomach before his baseless graphs and lack of data made me stop, but I saw enough to know an Epsilon Minus Semi-Moron trying to sound academic.

    You couldn't stomach what he had to say because it challenged your close-minded view of the world. The author didn't present data because (as he repeatedly pointed out) there is no data to present. OOP is hyped as the next big thing, but no one has studied the parameters of when, why, and where it is successful in any meaningful way. He stressed at the start of the article that he was presenting his view of the world and where he was oriented in order to have that view. His article was a data point and asked questions. Scientists and academics don't just present data; they also gather, analyze and question data and assumptions. Assholes and idiots, on the other hand, resort to name calling and flaming when their ideas of their own superiority are called into question.

    It was a pathetic article, and anyone with a college degree and a smidgen of programming experience would tell you the same thing. It certainly provided a lot of laughs in my office.

    There are a lot of people with college degrees and lots of programming experience who would say that it is a quite well thought out article. Many with the same credentials would say that the article is mediocre flamebait. Either way, if this is what produces laughs in your office, then you need counseling.

    Overall, the author tried to point out that no language is appropriate for all projects and tried to set parameters for where one language/style of programming would be better than others. Of course, some people who only know OOP believe that OOP solves everything and get upset when someone suggests differently.

  • I think you've misunderstood me, I most certainly was NOT claiming that OOP "invented" polymorphism, or anything of that sort. I was merely saying that polymorphism is one of the three main things that "OO" encompasses (and typically supports at a language level). Nowhere does that even vaguely imply "invention". In fact I even rather explicitly stated that encapsulation is quite a natural concept in pretty much any paradigm - so how exactly did you arrive at the conclusion that I (or anyone else for that matter) was claiming that OOP *invented* those things? If you've seen that, it's in your own mind, not in my post.

  • People just want to get it done instead of getting it done right. When you start your project, it does not matter which language you use or which programming method, as long as you choose the right tool for the job. At school people are always asking why I use C instead of C++; I simply say I do not need C++ right now, therefore I do not use it.
  • But what evidence do you have that these concepts lead to good design

    Holy cow, did you even read the part of my post you are quoting? I said it DOESN'T lead to good design. Good design has nothing to do with the paradigm you're using. You must have been in a separate universe or something when you read my post; you've missed the whole point.

  • I think you'd better have a look at this, pal.

    http://www.angryflower.com/bobsqu.gif [angryflower.com]

    Apostrophes are certainly NOT used for plurals in any way, shape or form! I think I know who needs a slap in the face...
  • Orbix. Bwwhahahaha! We ditched that for TAO (a free, open source, standards-compliant ORB). Even if Orbix were the fastest, the cost you pay in bugs, non-standard implementation and dreadful customer support more than outweighs any benefits.

    It's 2 years since I used it. Back then it felt like a bad port from UNIX to Win32. We couldn't have any COM DLLs talking to our CORBA services. Why? Orbix spawned some threads and provided no way of deleting them. This caused crashes when the COM DLL unloaded, or it caused a crash on exit as the process shut down. Unusable.
  • Unfortunately some people have strange hangups about language. For example RMS, for whatever reason, urges [gnu.org] GNU programmers to write ANSI C and *only* ANSI C. So you get things like GNOME and GTK, which, though thoroughly object-oriented in design, have to be written in the object-oriented C idiom.

    Looking through the sources of GNOME and GTK is quite a trip--it *seems* well-organized, and probably is. But the language idioms used are frightening--it's C, and legal C at that, but at what cost? No one offers training in this dialect--universities use object-friendly languages when teaching object-oriented design, and businesses use object-friendly languages when they need object-oriented design.

    The clearest book on C++ I've read comes in near 1000 pages. If I want to contribute to most GNU projects, I have to generate the knowledge of their equally complicated pidgin OOP-C for myself by staring at reams of barely-commented source code. If written in C++, the code would be identically fast and have 25% the footprint, and probably fewer bugs too. But RMS says no, so they don't do it that way.

  • A company named Thoroughbred [tbred.com] has an Object-Oriented programming environment built around Basic called OpenWorkshop. The company I work for has several million lines of code written in it.

    Now, I'll be the first to admit, their idea of "objects" strains the definition a bit, but with a very loose interpretation of what OOP is, this fits the bill.
  • Well, I will pit well-designed procedural/relational apps against well-designed OO apps any day.

    I would claim that any well-designed procedural app looks a hell of a lot like an OO app. Relational and object are technologies that can go together, they are not mutually exclusive concepts.

    I see nothing inherent in OO that really improves maintenance in the business domain. Inheritance does not model business things well, and OO's version of HAS-A is messier than procedural IMO because you have to have an object reference attached to lots of stuff and find a class to stick every method into, even if it should be self-standing.

    Inheritance != OO. Inheritance is a necessary property of an OO language, but that does not mean one should use inheritance every chance you get. Properly used inheritance is like a good spice--use it sparingly. A sign of a bad OO system is one with deep and/or complex inheritance hierarchies.

    The advantage OO has is that it is a discipline that describes how you write well-designed components. Procedural is not analogous to OO, as it does not prescribe in any way how you properly design your code.

  • [quote] It is a clean module packaging mechanism that encourages cleaner interfaces between modules [end quote]

    Modules are available in some procedural languages also. (Don't bring up state. It was already mentioned.)


    I was specifically referring to the benefits of C++ over C, and in the real world many programmers don't even have the discipline to structure C into module definition (.h) and implementation (.c) files, or even to use #ifndef/#define protected headers. I programmed in Modula-2 for many years, and it's a great language, but hardly an option for most projects today, and certainly looking rather dated in terms of features.
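
    For reference, a minimal sketch of that discipline (hypothetical window module):

    /* window.h -- module definition file with an #ifndef/#define guard */
    #ifndef WINDOW_H
    #define WINDOW_H

    typedef struct window_t window_t;  /* opaque type: layout lives in window.c */

    void window_init(window_t *w);     /* public interface only */
    void window_draw(window_t *w);

    #endif /* WINDOW_H */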

    [quote] It encourages opaque data interfaces (method access vs public access) which results in less bugs [end quote]

    Real-world business example?


    How can you feel qualified to write an opinion piece on design technique (OOP) if you can't understand this without an example???

    Let me spell it out for you:

    If you make a C++ data structure member private, and *only* accessible via methods, then you prevent people using your class from accessing or modifying the data structure in incorrect ways. One source of bugs made IMPOSSIBLE.
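
    A minimal sketch of the point (hypothetical Account class, names invented for illustration):

    // balance_ is private, so the only way to modify it is deposit(),
    // which enforces the invariant in one place. Code that would write
    // garbage into the member directly simply does not compile.
    class Account {
    public:
        bool deposit(int cents) {
            if (cents <= 0) return false; // reject invalid updates
            balance_ += cents;
            return true;
        }
        int balance() const { return balance_; }
    private:
        int balance_ = 0; // inaccessible except via the methods above
    };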

    BTW, who said anything about business programming?

    If anything, this tends to create set/get bloat--One of the silliest structures that is just crying out for some sort of cleaner factoring. Also, they can often be replaced by good data dictionaries.

    Usually the data structures involved would be something more complicated than a variable that can be "set or get"!!!

    With your comment on data dictionaries, you again seem to want to limit the discussion to business, and specifically database related, programming! Odd.

    [quote] It makes use of self-initialization/cleanup (constructors/destructors) that avoids a whole slew of programmer errors. [end quote]

    I just got chewed out by readers who claimed that defending my garbage-collection myth was a strawman because nobody believed it except a few dummies.


    I'm talking about C++ constructors and destructors period. Who said anything about garbage collection??

    In short, you are comparing C to C++. C is NOT the pennacle of procedural/relational programming.

    Is this meant to be a practical discussion or not? In the real world there are very few widely used languages. Outside of the modern scripting languages like Perl/Python/etc, and leaving aside specialty languages like VB, you've basically got C, C++, FORTRAN, COBOL, Java, and Ada if you work in the defense industry. Where's your pennacle [sic] of procedural languages? Ada?

    [quote] The self-containedness of objects does make code reuse simpler and less bug prone. [end quote]

    Specific example?


    Huh? Are you challenging that statement???

    Perhaps you think it's easier to reuse code when there's MORE to think about and get right, rather than LESS?

    Again, these all sound great on paper, but when specifics are looked at by the non-indoctrinated, the ugly grey of OO thinking starts to ooze out.

    I'm neither talking about on paper, nor am I OO "indoctrinated". I've been programming for over 20 years in languages including 6502/Z80/68000/16032/x86 assembler, BASIC, FORTRAN, COBOL, Algol-W, LISP, Pascal, Modula-2, C and C++. IMO C++ is simply the best general purpose language available today for utilizing the sound programming techniques I've learned over the years, and OO is to a large degree just a modern name for design techniques that have been around for years, but are now better supported.
  • by AME ( 49105 ) on Wednesday January 10, 2001 @08:25AM (#520335) Homepage
    Many OO debaters often stretch the meanings of the big three (poly, inher, encap) to mean whatever they want them to mean. ... "Encapsulation" is such a vague, watered-down word that it can mean just about anything you want it to.

    Just because many people don't understand the definition does not mean that the definition is vague or watered-down.

    Anything that is "not at the right spot when you need it" can be blamed on "lack of encapsulation".

    I would more likely blame lack of coherent design. But lack of encapsulation can result from the programmer hacking his way around the encapsulation to solve the problem.

    Note that many of the goals of OO can be achieved in ways that are not necessarily OOP.

    Agreed.

    --

  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Tuesday January 09, 2001 @04:26PM (#520336)
    It's also true that you can do good object oriented design with a Turing Machine, implemented in the Game of Life, composed of a million midgets wearing reversible parkas, which is directed from above by an Elvis impersonator in a hot air balloon shaped like a guitar.

    You signed an NDA on that design. Expect a call from our lawyers.

    --
  • Can you get these in C? Yes! File scoping is the most underappreciated feature of C

    Pray tell, how do you instantiate a file at runtime?

    --
  • > To use a language (any language) you have to grok it

    I don't know why, but I just can't stop chuckling after reading that.

    --
  • Yeah, exactly.

    So, maybe I exaggerated with the "massive improvements" bit, but I'll tell you that it's certainly more fun to code using some kind of plan/design. In my case, usually OO, and usually in C, C++ or Perl. Nothing better than reusable objects if you ask me.

    At my previous place of employment, there was an attempt made to do that very thing: build a reusable framework of objects for the business software. There were problems galore with making this work because one of those drag and click types of commercial "programming" tools was chosen, where you create graphical objects on screen and then add code inside the objects, code that's tied to that graphical representation and to that object. In other words, the code was not factored at all, and if you needed a different widget, or even no widget, that implemented the same business rules in code, then you had to rewrite the functionality from scratch. Very poor planning, and it just shows that the problems with OO are not really problems with OO but with OO that is misapplied by developers who don't really understand the paradigm.
  • You just got through saying that most of the time inheritance is used sparingly in good OOP?

    Sparingly is not the same as not at all.

    What is left? Encapsulation? That is a loaded issue because what aspect to "encapsulate" by is not cut and dry.

    Encapsulation, polymorphism, and abstraction. And encapsulation is the big piece. In OO parlance, it means to capture the data behind a concept along with its behaviors--and not expose that behavior and data willy-nilly to the public.

    So OO is all about component design?

    Not entirely, but it is the most critical element. An OO language is simply a language that maps well to OO component designs.

    But there is also OO analysis.

    That does not sound like a majority OO fan stance to me.

    It is probably not what you hear from the inexperienced Java and C++ programmers who think they know OO because they are using an OO language. However, most OO experts will tell you the language and the technology only have a secondary importance to the OO-ness of a system. It is the design.

    Besides, the making of components themselves are not what most custom application programmers do. Does that mean that OO is nearly useless for them?

    What are you talking about? Custom does not imply not componentized. A custom application can be built from existing components. Similarly, just because you are writing something for the first time does not mean it is not a component.

    Procedural tends to emphasize grouping (encapsulation) by actions, while OO tends to emphasize grouping by nouns. This is perhaps an over-generalization, but seems to hold true time after time.

    This is a crude description that serves only to help someone who has no understanding at all of the difference between OO and procedural so they can get an initial grasp of the concept. OO is about modeling a system, not programming. It looks at the processes to be modeled, aka use cases. Note, OO starts with actions! The system then looks at the things involved in these actions and the ways in which they interact. Then classes are built to model the things and their interactions. OO is things and their actions, procedural is nothing more than actions. And that is why procedural applications are poor models of the processes they are supposed to be modeling.


  • A good solution in C isn't quite as simple as you described. Specifically, your circle is described by parameters top, left, bottom, & right. Not only is that an inefficient way to describe a circle, it is ambiguous for other simple shapes (e.g., two different ellipses can have those 4 parameters be identical). So, to do it "right" in C, you'd have to do something like having your shape structure hold a pointer to another structure that holds the shape parameters.

    First, let me mention that I had originally included a line "//..." after the "members" and "methods" declaration lines in the struct, to indicate that more would go there, but Taco's so-called "lameness filter" once again demonstrated its own lameness by tagging them as "junk characters", so I had to take them out to post it.

    I don't disagree with you much, but my example still deserves some defense. It may not be a good (efficient or even sufficient) way to describe geometric shapes for mathematical purposes, but it is good for objects that correspond directly to stuff that is to be drawn to some output device. Most 2D graphic toolkits I've seen have a DrawOval() function with exactly the same interface as DrawRectangle(), i.e., they take a Rectangle (struct or individual coordinates), and disambiguate the ellipse by using the one with vertical and horizontal radii. Even if the rectangle coordinates have nothing to do with the specific shape, they are needed because the code that manages the shapes together generically will need to know each one's bounding rectangle, so this information is universal to all shapes. In OOP terms, it would be in the abstract base class; in non-OOP syntax, it would be in the struct as I have it, along with (as you say) a pointer to another struct with any additional (you didn't say that) shape-specific information, which would have gone in the place of my lamented "//...".

    Note the phrases "In OOP terms" and "in non-OOP syntax" -- this gets back to my main point about OOP as a way of coding vs. OOP as a way of thinking. It's not just (as we all seem to agree) that you can do OOP without the syntax by using some (admittedly, increasingly awkward) techniques. I'm also claiming that there's not (necessarily) anything special about "OOPy" code at any level. In some sense at least (I just seem to keep adding that disclaimer everywhere) OOP is in how you think about it, no matter how you implement it, or even whether or not you implement equivalent techniques at all. Even if you don't use the tricks we have been discussing with structs and function pointers as substitutes for OOP syntax, your code could still be called "object-oriented" as long as it is structured around entities or things with behaviors and interactions that model stuff, which will probably happen to some degree whether you think about it in those terms or not.
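
    A rough sketch of the struct layout being described (hypothetical names, with the lamented "//..." restored as comments):

    /* Non-OOP syntax, OOP thinking: the bounding rectangle is universal to
       all shapes; shape-specific parameters hang off a separate pointer. */
    struct Shape {
        int top, left, bottom, right;      /* common members: bounding rectangle */
        void (*draw)(struct Shape *self);  /* "method" as a function pointer */
        void *specific;                    /* -> shape-specific struct (the "//...") */
    };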

    David Gould
  • RDBMS systems are certainly useful; nobody said they weren't. However, OO databases do partition things more naturally, at the cost of performance. Often RDBMS systems are wrapped by a middleware layer to hide the complexity and emulate OO behavior while retaining the performance and scalability associated with RDBMS systems.

    Comparing OO Components to procedures is just plain silly. They are different concepts. Read Szyperski's book on components (and yes, non OO components do exist).

    "The real world does not give a rats behind about fat-type trees. It will change any way it feels like."

    The real world is also full of fools who make the same mistakes over and over because they think they are experts while in fact they are not.

    "A graph-oriented approach like relational modeling and HAS-A relationships model this aspect of the real world better."

    You should take a look at adaptive programming and aspect oriented programming. (use google to find the relevant material). Both approaches extend the OO paradigm to support what you want.
  • Another sounds-good-on-paper statement that I have been unable to translate into real benefits. How about a realistic example. I find it hard to draw a single wall around things in a system without creating a nasty little bureaucracy for the team. The "walls" depend on the aspect that you want to view things by at the moment.

    No, you appear not to understand OO encapsulation at all. The aspects you speak of are nothing more than views of the object, and it has nothing to do with encapsulation.

    OO encapsulation actually works very well in the real world. Consider, for example, the use of a Date object instead of a char array or even a long. With the Date object, you have one place to fix your Y2K problem if you made the mistake of storing the date internally in the Date object as a char array with the year in 2 digits. If you make the same mistake in the procedural world, you end up with a big ass bug that costs a hell of a lot of money to fix.
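
    A minimal sketch of that argument (hypothetical Date class):

    // Callers see only the accessors, so if the internal storage turns out
    // to be wrong (say, a 2-digit year), it can be fixed here, in one place,
    // without touching any calling code.
    class Date {
    public:
        Date(int year, int month, int day) : y_(year), m_(month), d_(day) {}
        int year() const { return y_; }   // the interface stays stable even
        int month() const { return m_; }  // if the representation changes
        int day() const { return d_; }
    private:
        int y_, m_, d_; // internal representation, free to change
    };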

    Look at something like the OO Visitor Pattern. That is the mess created by increasing the aspects by *one* dimension. Add another, and kapoof!

    If you think it is a mess, then you do not understand the pattern. The Visitor Pattern helps eliminate the mess the same code would otherwise have in either OO or procedural realms. And, the Visitor Pattern is also overused in places where it is not appropriate.

    Close enough. Languages should only support fairly common things, not blue-moon things. Otherwise, it ends up being an electronic packrat or offering arbitrary features for bored programmers to futz around with (and they *do*).

    Ugh, it is not "close enough". It is nowhere near close enough. Most classes in an OO system will inherit from something else. Sparingly means that your inheritance trees are simple and shallow.

    Yeah, everybody bashes everybody else's modeling techniques, even WITHIN the OO community. This is what happens when there are no metrics. Metrics are the cops of ideas.

    First, this paragraph changes the argument. Your original point was that a lot of OO "fans" seem ignorant that OO is about design, not coding. Now you say that the problem is everyone is bashing everyone else's modeling techniques and there are no metrics. A different argument.

    On the new argument, however, you are still wrong. There are formal processes, with the Unified Process being the most widely accepted process. And as with anything, there are expert OO modelers and there are wannabes. Expert OO modelers are getting the job done.

    I hope you are not one of those who try to generitize everything.

    Nothing I said suggested I do. So the rest of the paragraph that follows is meaningless, since it is based on that assumption. However, your claim that you can never have true genericity is as absurd as trying to make everything generic. What do you call the Date class in Java?

    On use cases, This is not really OO, per se.

    Uh, yes it is. It is the starting point of OO software engineering. Non-OO processes do not model things in this manner. They start with workflows and decision trees, which are very, very, very different from use cases and object interactions.

    The rest of your post is on how you think your tables and procedures are the best models for a reporting system. First of all, your dismissal of OO for this problem shows a poor understanding of how the modeling should be done. ER data models are nothing more than collections of DATA and ways of relating the DATA. While the data is data about things in the real world, the table structure is only indirectly and poorly a representation of the thing being modeled. The result is that systems relying on the relational database as a model of the business processes end up being difficult to impossible to maintain over the long haul.

  • I'm not really that concerned with convincing anyone, in truth. If there are good tools available and you choose to make your own work harder by ignoring them, it's no sweat off my back. :)

    However, here's a quick example where OO excels. I was in the game industry for many years. Virtually all games involve a bunch of "things" moving around on the screen, anything from spaceships and laser beams to puzzle pieces falling from the sky.

    For years I struggled trying to reconcile the fact that all of these "things" are very similar in many fundamental ways. They all move around and bump into other things. They all need to be saved to disk and loaded again at savegame time. They may need to blow up or otherwise be destroyed when certain game events happen. And of course, they need to display themselves on the screen somehow. Yet they are all very different - the logic that controls, say, a homing missile is quite different from the logic that controls the player character, for example.

    During this time I wrote hacks involving void pointers, casting, functions with huge "switch" statements at the top to try to figure out what something was, and other roundabout ways to handle all the similar functionality of these things in one place, yet still allow for all the special code that goes with each type of thing.

    OO solved all of that nastiness for me. It pretty much works the same in the end, but it's much easier to code, and *vastly* more maintainable. I can now make a base class entity, and derive all my specific types of things from them. I have virtual functions for Move(), Collide(), AI(), Update(), Draw(), which have defaults, but I fill them in with special code for each type of thing, and I never have to duplicate my basic motion/collision/whatever code, nor are there any messy hacks to be able to put all these types of things into one giant list of entities, but still have each thing "know" what it is and how to behave.

    It's more complicated than this, but that's the gist of it. Every game company I've worked at has used this scheme almost exactly (although the terminology may differ) because it works so well.
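
    A minimal sketch of that scheme (hypothetical class names; real versions carry far more state):

    #include <vector>

    // One base class for every "thing" in the game: each entity overrides
    // only what makes it special and shares the default behavior otherwise.
    class Entity {
    public:
        virtual ~Entity() {}
        virtual void Move() { /* default motion/collision code */ }
        virtual void Update() {}
        virtual void Draw() {}
    };

    class HomingMissile : public Entity {
    public:
        void Move() override { /* missile-specific steering */ }
    };

    // No switch on type: each entity "knows" what it is and how to behave.
    void tick(std::vector<Entity*>& entities) {
        for (Entity* e : entities) {
            e->Move();
            e->Update();
            e->Draw();
        }
    }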
  • Who said anything about hierarchies? OO doesn't have to be hierarchical at all. Few OOLs require all objects to be derived from some base class. C++ certainly doesn't.
  • All the things you've mentioned are compiler issues.
    FunOne
  • The author makes two valid (but unoriginal) points: firstly that trying to do everything with inheritance is bad (every non-novice OO programmer agrees), and secondly that you don't need support for classes and objects to get the benefits of object orientation (but you do need first-class functions and dynamic/existential/abstract types, obvious to the LISP world for over twenty years).

    Object orientation makes it easier to manage changing interfaces. Support for object orientation makes it easier to achieve this in programming practice. The author seems so upset about bad designs he has encountered, caused by inflexible, deep inheritance, that he seems blind to the fact that OO support can facilitate good design.

  • One of the basic assumptions is that the human brain is built to think about the world in terms of things that have properties and behaviour. We can think in terms of procedures and execution flow as well, but we're not nearly as good at it.

    The problem is, humans tend to think in terms of something acting on something else. With the data and methods mixed together, it feels like building a house by 'asking' the nails to hammer themselves rather than 'asking' the hammer to act on that nail.

  • by SpinyNorman ( 33776 ) on Tuesday January 09, 2001 @08:52AM (#520498)
    The article is vapid since it addresses a collection of OOP myths that don't cover the real benefits, and completely ignores the practical reality that for most people the OOP/procedural choice is a C/C++ choice, and that C++ has many features (such as constructors/destructors) that, while not necessary for OOP, are huge benefits.


    Myth: OOP is a proven general-purpose technique
    Myth: OOP models the real world better
    Myth: OOP makes programming more visual
    Myth: OOP makes programming easier and faster
    Myth: OOP eliminates the complexity of "case" or "switch" statements
    Myth: Inheritance increases reuse
    Myth: Most things fit nicely into hierarchical taxonomies
    Myth: Self-handling nouns are more important than self-handling verbs
    Myth: Only OOP has automatic garbage collection
    Myth: Components can only be built with OOP
    Myth: Only OO databases can store large, multimedia data
    Myth: OODBMS are overall faster than RDBMS
    Myth: C is the best procedural can get
    Myth: Implementation changes significantly more often than interfaces
    Myth: Procedural/Relational ties field types and sizes to the code more
    Myth: Procedural/Relational programs cannot "factor" as well


    Classic OOP features such as inheritance may generally be over-hyped, but specific cases such as polymorphism are very useful (even if there are ways of doing the same thing in languages such as C that were not designed for OOP).

    Some of the major benefit of OOP, and specifically C++'s objects, are:

    o It is a clean module packaging mechanism that encourages cleaner interfaces between modules

    o It encourages opaque data interfaces (method access vs public access), which results in fewer bugs

    o It makes use of self-initialization/cleanup (constructors/destructors) that avoids a whole slew of programmer errors (see the sketch below).

    o The self-containedness of objects does make code reuse simpler and less bug prone.

    etc, etc.
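
    On the constructor/destructor point, a minimal sketch (hypothetical File wrapper, names invented for illustration):

    #include <cstdio>

    // The constructor acquires the resource and the destructor releases it
    // automatically, so the "forgot to fclose" class of bug cannot happen.
    class File {
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {}
        ~File() { if (f_) std::fclose(f_); }
        File(const File&) = delete;            // one owner per handle
        File& operator=(const File&) = delete;
        bool ok() const { return f_ != nullptr; }
    private:
        std::FILE* f_;
    };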

    The guy who wrote the article (and the esteemed Mr. Taco) appear not to have much hands on experience writing OO code.
  • Properly implemented, code re-use can pay off immediately.

    I don't know very much about OO -- I learned about it at University (in Eiffel) but it never 'clicked' and I always found OO design to be something of a strain. I only mention this to demonstrate my bias :)

    However, you seem to be implying that code re-use is something you only get from OO, and that just doesn't make sense to me. I really only code in C and Perl these days, and I reuse code all the time.


    --
  • The article really hyped the idea of table-oriented programming. I've seen both brilliant and horrible examples of T.O.P. since I started programming. The horrible examples were, of course, the ones where the developers insisted on absolute purism in every corner of the system. This is true in just about any development environment--the difference between reality and a model of reality is that reality usually doesn't fit into models very well, because it is inherently a superset of any model.

    And I've seen at least one OOP project go down the tubes because the designers, among the smartest people I've ever met, treated data persistence as just another implementation detail--they had been seduced by the model of a computer that never fails and has unlimited memory.

    But one of the most successful projects I've seen is one where an object-oriented framework provided a transportation and communications infrastructure on which table-oriented applications could be developed. OOP gave us just the right level of hierarchical abstraction to make platform-specific issues irrelevant to higher layers; at those higher layers, the mind-boggling flexibility of table-driven rule sets provided a potent way to model business processes without trying to put everything in exactly one box.

    I do believe we are heading for a convergence of sorts. The tables in question are going to be XML, with Java (or pick another OO language) providing the framework in which (and with which) to parse it.

    --

  • There are lots of paradigms around. You can go ahead and choose your favorite, but let's not make this an OOP versus Procedural battle.

    Check out abstract, typed functional programming in a language like SML, O'Caml, or Haskell (I find it's much easier to write abstract modules in these languages than it is to do so in an OOP language). Check out logic programming in a language like Mercury or lambda-prolog. Don't forget the zillions of other genres and subgenres.

    C'mon, slashdotters, it's your responsibility to ignore the marketing hype and use technology because it's superior. Do your shopping before you fall for something because everyone uses it (procedural) or because it's buzzword-compliant (OOP).
  • i totally disagreed with this article, but i respect the guy. why? because he is not just spouting shit; he wrote lots of material and presented it from his own humble point of view! learn, learn, learn! is there anyone on slashdot who is willing to write half as much to prove him wrong? nah, instead the best most of you can do is flame him with 4-line comments. OOP is not just hype; yes, it has been overhyped, and it is not the cure for everything either, but it is here to stay.

  • It tends to be a little too anti-oop though.

    I use OOP and procedural programming because they each have benefits. OOP enables me to clearly envision the flow of code and make everything modular. It also enables me to manage multiple instances of the same type of object more easily. However, it's possible to implement the same functionality in procedural form. And somebody familiar with coding everything as procedures can visualize the code just as easily. If you have used procedural all your life and are familiar with it, then you can perform the same tasks as somebody that is experienced in OOP.

    Using procedures, you may have a separate source file for procs related to working with strings. You could also use OOP to declare a string object, and just put all the functions within the object. Note: The above is just an example, don't EVER waste time with string objects unless you are forced to.

    It's really more of a personal preference than anything else. Synonymous with your wallpaper. In the end, all that really matters is whether or not the software performs its intended function.

  • That's not a misconception at all. I think you need to look at the definition of an "Object Oriented" language. You'll see that to be OO a language must support certain things like inheritance, polymorphism, run time binding (dynamic binding) and many other fun things that you can't do in procedural languages. C doesn't support any of those.

    Since g++ (the GNU C++ compiler) is written in C, C and C++ are equivalent. In fact, all programming languages are equivalent. You could write OO code in a big pile of NAND gates if you really put your mind to it... or on a Turing machine... or on a Neural Net... etc. Basic language theory, that.

    Just because a language doesn't provide the syntactic niceties of (say) Eiffel, doesn't mean you can't create objects with methods and inheritance, and ask them to run said methods.
    --
  • by DaveWood ( 101146 ) on Tuesday January 09, 2001 @09:13AM (#520513) Homepage
    Myth: Implementation changes significantly more often than interfaces

    This is the most stingingly correct point, in my opinion. In fact, in my experience, they change equally often at best. And in cases where code is actually meant to be reused - something which, by the way, some of my smartest friends have told me, after no small amount of experience over the years, never actually happens - it's the interface that often proves more likely to need modification...

  • I agree with you on this one Neutron. My favorite comment was:

    I just heard someone say that they found an old procedural program of theirs that used too many global variables and too many parameters. Rather than blame his bad programming or lack of knowledge about procedural/relational organization techniques, he blamed the paradigm and used it as a sorry excuse to proceed with OOP.

    Ironic given his very long rant essentially pointed at bad OOP implementations rather than OOP as a paradigm!

    Of course, I run Cetus Links [cetus-links.org] so I may be biased...

    --

  • have nothing to do with each other. If you reuse an object you can still get stuck with stuff that you don't want. An object has properties, and it may be that the object that you are using has properties that you probably don't want, but if you use that object you get stuck with them.

    Personally I think OOP is okay in some cases. Take for instance Tcl/Tk. Tk is probably the best GUI programming language that I have ever used. Unfortunately that is about all it is good for. Yes you can do more with it, but I think the GUI is probably the best part. Guess what, it is done with objects as well: the button, the text, the label, the entry, etc. Tk is also not too heavy as far as overhead either. Some may say that Java is better, but I have actually tested a simple GUI text editor and the Java editor was slower to start and slightly slower than Tk. I have also used PowerBuilder. Now there is a nice OOP language. Too bad it is so slow and flaky.

    I have also tried C/C++, and the problem they both have is memory management. And let's face it, trying to figure out "do I need to destroy or free this?" can be a real pain in the butt.

    Personally I'd like to see a language that has syntax like Perl, but is compiled and efficient like C. Maybe C with garbage collection. Instead of me having to malloc and free, I'd just declare char *item; and then maybe item = "string"; Then when I am done with that string I'd forget about it. Sort of like Java, but a language that does not require you to do everything with objects, like Perl. More like a JavaScript / Perl / C / Java cross. PowerScript (PowerBuilder) is close, but is not a general programming language; it is a proprietary language.

    Oh, and if you look at the KDE and GNOME projects, both have major code reuse, one is OOP and the other is not (not really), and they are both awesome works of coding.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • nope - virtual method calls are a language feature, and calling lots of little subroutines is a coding style encouraged by OO devotees.

    On the other hand a compiler could spend time optimizing this stuff more (maybe more inlining of those tiny calls at least).

    The one thing that is a compiler issue in my posting is the poor use of the call/return paradigm (for example using a jsr to make a call, but not ret to get back means the architecture can't do a hidden [branch target predictor] stack cache to keep the pipe moving - many modern cpus, even x86s do this)

  • by Alioth ( 221270 ) <no@spam> on Tuesday January 09, 2001 @09:14AM (#520519) Journal
    OOP (IMHO -- I'm crazy for the acronyms today) is just a fad. Like structured programming was before it... Unfortunately a lot of these companies today fall into "trendy" programming methodologies.

    Structured programming a fad? OOP a fad? A rather long-lived fad then! Software developers have been using structured programming techniques and OO design for years, and will continue to do so.

    Personally, I believe you should program using the style you're most comfortable and familiar with. If you're trying to fit a mold it will slow you down.

    That's fine if you're working on a small project with maybe one developer. But if you had 30+ developers using their own techniques and styles on a large project (like the >1MLOC project I've been working on for the last 5 years) you'll end up with an unmaintainable mess. If everyone conforms to a standard methodology and style then at least you can maintain each other's code.

    This is one of the differences between software ENGINEERING and code that's been congealed instead of designed. Software engineering should result in a robust and maintainable system. Letting every programmer go off and do their own thing as you advocate leads to the truth in the phrase "If builders built buildings the way programmers write programs, the first woodpecker that came along would destroy civilization!" It sounds nice to let the programmer go off and just write in their own style using their own unique design techniques. But it's terribly naive and unrealistic in the real world.

  • I am *not* a OOP programmer. To be honest, I've never been able to understand it.

    snip

    And you know what? I've never understood OOP. I just don't "get it." Sure, I understand the theory, but when it comes to real work, I've never understood it.

    It's like if everyone started calling the TV a "forbick". Doesn't make any sense does it? I mean, it's a TV. Why call it a forbick?

    To use a language (any language) you have to grok it, and think in that language or - in the case of OO - in that kind of style. Your example of calling a TV a "forbick" is kind of right on - the French, for example, call cheese "fromage".

    You'll never be a good Francophone if you have to think in English, then translate to French, and say it. You have to actually think in French. Same with computer languages. When I first learned C, I was thinking in BASIC and translating. It didn't really work very well. (BBC BASIC doesn't have pointers for one). It was not until I grokked C and started thinking in C that I got any good at the language. Although it seems natural to think in C then code in C++ because the languages are so similar, unfortunately it won't work out very well. You have to think "object oriented" instead of thinking in non-OO and translating. Once you get to that stage, you'll truly grok OO and its usefulness.

    I'm currently learning Perl. I'm trying to start off thinking in Perl rather than thinking in C++ and translating. So far, it's working OK, but I'm still doing things the C way that would be better done the Perl way, so I guess I'm not truly thinking in Perl yet!

  • I understand where you're coming from. Everybody's mind works differently. I found structural/procedural programming reasonably straightforward. When I switched to OO, it felt orders of magnitude more natural, and I never want to switch back. Other paradigms were much harder for me. Functional programming, such as Lisp or Scheme, was a real nightmare... eventually I learnt how to do recursion in other languages from the experience. SQL has been the hardest challenge for me... I just don't think in sets like that! That caused me a lot of personal grief and stress... I think I'm a reasonable C++ programmer, but when I had to do some stuff with the database, I felt incompetent. Eventually I got my mind around that too, but I'm certainly no great SQL guru (and I doubt I ever will be).
  • Unfortunately, a large contingent of programmers claim that OO is the only way to program, and that it is universally easier to write, easier to read, faster to code, faster running, etc., etc. You see it in the 'language wars', where you often see "Java is better than C, because the OOP support is better." Please. You should base your style of programming on the task at hand, not try to twist the task into the techniques of the current fad.

    There are problems where OOP is good, and problems where it's not such a good idea. If you insist on using one or the other, pick those problems that it's suited for. I admit that I have some real problems with object-oriented programming's approach to things, but I don't want to convince people to give it up. Just don't make me do it. (That's the main reason why I chose not to do CS in school.)

  • Much of this article seems to be OOA: Out Of Ass. The author states many figures, such as designing taking three times as long, with nothing to back them up. There are some valid points, but most of them apply to misused OOA&D.

    Object reuse only coming after years? Well, I work for a company that is a big user of object technologies. We have our own data access, mapping, and server objects that are our bread and butter; we can go in and have an application up in little time because we don't have to write most of the functionality, just abstract the business logic.

    Buy-in. That is true: if not everyone buys in, the programmers will be pulling their hair out. However, this is true of adopting almost any methodology.

    Not everything is abstractable or fits in a hierarchy; true, but I find that in most of the work I do, it does. There are cases where some of what needs to be done doesn't make pretty objects, but that is the exception.

  • I've been programming for over 20 years now, and, unlike almost anyone here, I've been on mainframes, and PCs, and w/s.

    'Bout 7 years ago, I took some coursework in OOPs and GUI (sounds like someone dropping an egg, to me), and what I saw then was this:
    1) OOD looked real good... BUT the closer you got to actual coding, the fuzzier it got (as opposed, for example, to flow charts), and
    2) what OOP appears to be doing is enforcing by compiler all the good coding practices that they've been trying to teach for 25 years (y'all use goto's in C, frequently? how 'bout type checking?).

    What I've seen since then is that while it *may* be possible to write good, tight code in an OO language (though I found the comments from the person who worked on chips, above, fascinating), the overwhelming majority of coders write *lousy*, bloated code - you want a clipping of Gojiro's (Godzilla, to those who don't know) toenail, and you get the big guy himself, with a window frame around his toenail. Explain Lose98, or LoseMe, or M$ Office....

    Reusability - isn't that where knowledgeable management assigns the position of librarian to someone (*sigh*, probably the configuration manager), and they make sure that programmers use existing subroutines? Oh, right, sorry 'bout that oxymoron about "knowledgeable managers".

    Thank you, some of us would rather master our discipline (check the meaning in the dictionary that doesn't refer to whips), and write good, tight, fast code...that can be maintained by someone else, when we've moved on.

    mark
  • Granted, OOP is not the silver bullet that will solve all your problems, as some people seemed to think it was. It will not automatically make your programs more easily updated or faster, and it will not give you a full glossy coat of fur. However, OOP is most definitely not a fad; it is here to stay and will continue to be very important. There are many circumstances where Object Oriented design is completely unnecessary and would give no advantages, but there are also many cases where OOP is in fact the magic bullet that makes everything click together.

    Personally I write OOP only very rarely, but that's partly to do with the nature of my work, which is more small-program oriented than large-system oriented. However, I do find OOP to be significantly useful on lots of occasions. One way that objects are particularly useful is in extending programming languages. The old style of libraries is somewhat cumbersome and leads to unnecessarily bloated code that is more difficult to read (and to understand easily), whereas extension via objects makes eminently more sense and keeps your code (which may or may not be Object Oriented) a lot tighter. Perl demonstrates this excellently, since most Perl programs are not Object Oriented, but almost all the "extensions" to Perl that are being written (and being used) these days are Object Oriented.

    And in some ways OOP is still playing catch-up with older programming styles; it's a lot easier to change languages than it is to change to a whole new philosophy of programming, and I think that has shown. OOP is not a fad; it's most definitely here to stay, and in fact I think it will grow as more and more languages (other than C++ and Java) take on Object Oriented design and more and more programmers learn when and when not to use OOP.

  • by scotteparte ( 240046 ) on Tuesday January 09, 2001 @09:16AM (#520528)
    It seems our skeptical OOP critic is pulling graphs from nowhere and making logical errors left and right. Refuting his article is quite easy, but there is no doubt in my mind that he would not accept the refutation. His failure is that he refuses to accept the absolute need for abstraction.

    I work in a software company, and our product takes up no less than 300 MB of code. Even in the most well-organized non-OOP code, our software would be impossible to debug or even build, because we would need to go through hundreds of lines of code. In addition, reusability would be hurt, since even though the functions would be there, minor changes in the arguments might make the entire function worthless.

    The author's example using People and Taxes is particularly striking. He suggests that an object oriented approach would create a person object and a tax object, set their attributes, and run the T.CalcTax method on Person P, while a procedural approach would just feed the relevant parameters to a function. I wonder if the author has ever actually filled out the 1040 Personal Income Tax Return Form [fedworld.gov]. The easy version has 70 entries, and while some are calculations, others are references to other, much bigger, forms! Keeping track of all these variables without some structure to hold them all is just stupid. Object orientation is necessary when code hits a level of complexity where several people, or several hundred, are simultaneously working on a project. The "black box" approach allows for greater flexibility and optimization, since a code change will be transparent to the objects around it.

    Another thing to consider, although I know CmdrTaco would berate me for even mentioning it, is the expansion of OOP provided by Java. The Interface in Java allows you to specify several functions, abstractly, that are required for a class to implement the Interface. The implementation of these functions is class-specific: for example, all clothes implement the Wearable interface, but you would not want underwear and shoes to have the same implementation of Wear(). However, in Java, you may specify a function to take a Wearable object, and not need to specify any further. This abstraction level is why OOP does, in fact, better model the real world.
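    A rough C++ sketch of the same idea, with an abstract base class standing in for the Java interface (the class names are the ones from the example above; the method bodies are invented):

        #include <iostream>

        // "Interface": anything wearable must say how it is worn.
        class Wearable {
        public:
            virtual ~Wearable() {}
            virtual void Wear() = 0;  // pure virtual: no shared implementation
        };

        class Shoe : public Wearable {
        public:
            void Wear() { std::cout << "lace up\n"; }
        };

        class Underwear : public Wearable {
        public:
            void Wear() { std::cout << "step in\n"; }
        };

        // The caller is written once against "a Wearable" and never names
        // the concrete class; the right Wear() is chosen at runtime.
        void getDressed(Wearable &w) { w.Wear(); }

    getDressed() compiles once, yet runs different code for a Shoe and for Underwear, which is exactly the level of abstraction described above.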

    In conclusion, the rantings of a PASCAL junkie should not constitute a Slashdot article. Anyone who has coded in procedural and OO languages should see the extreme idiocy of the article within milliseconds. On the other hand, Perl powers Slashdot, so I guess this is the place for the procedural approach to have its message heard. In fact, Slashdot totally vindicates the article: its non-OOP approach is fast, effective, efficient, and easy-to-understand. Highly scalable and expandable as well. I especially like Slash.pm (aka THE BEAST) and the my_conf{shit} variable. I'm sure the non-OOP approach will really take off once everyone switches over from C++/Java to Perl.

  • Like any technique or technology, OO methods must ultimately be incorporated into a larger scheme of structured and generic programming (and perhaps one day, functional programming).

    Ultimately, multi-paradigm languages that can absorb new techniques as they mature, like C++ and Perl, will be the winners. I have yet to see a mono-paradigm language have sustained appeal over a five to seven year period.

  • Ignoring the tone of the article, which had a distinctive mid-pubescent feel, the thesis is mildly interesting: can procedural methodologies be as maintainable as object-oriented ones?

    Unfortunately, the thesis was quickly replaced with some teenage-esque rants that have been noted already.

    What seems to have been overlooked is the advantage of OO: code maintenance. In enterprise environments, a great deal of time is spent on project maintenance. Anyone familiar with procedural programming can testify that scanning lines of undocumented and tightly written code can be a nightmare. The old ideals of syntactic efficiency at the expense of readability, while fun and good in the hacker world, don't fit into the corporate world.

    OO attempts to solve this problem by introducing a methodology that makes it easier for programmers to make their code maintainable.

    This is analogous to the manager that keeps all his notes in his head versus the manager that documents his job and his notes. Both can probably do a great job, but when they move on, which one do you want to replace? I'll take the latter.

    All the arguments about procedural programming being faster and easier than OO are moot: it's the nature of the beast. But OO is definitely a great tool for facilitating ease of code maintenance and leveraging past effort.

    I can't believe the guy that wrote that post has had any career of significant length.

  • I wonder how much C++ code is actually OO, though?

    I've seen any number of C++ books which are effectively C books, using cout instead of printf, with an object orientation chapter tagged on as an afterthought.

    I'll bet a lot of Windows programs use OO to work with the GUI (because MFC makes them) but have an underlying program structure that's procedural to the core.

    I've no doubt that OO is heavily used, but you can't get comparisons just by counting how many programs were built with a C++ compiler.
    --
  • Taco is a perl c0d3r; what do you expect. The relationship between perl c0d3rz and programmers is in the 'distant admiration' category.
  • by Greyfox ( 87712 ) on Tuesday January 09, 2001 @09:17AM (#520537) Homepage Journal
    The hype of OOP is that it's a magic bullet that does all your thinking for you and that all you have to do to write a completely new program is derive a new class and change a couple of lines. This is, in a word, crap.

    Thing is, if you can't do design in procedural code, OO's not going to buy you anything. Projects that buy into the aforementioned hype will fail. A company throwing OO at a project without investing in training of its programmers will see the project fail. There are some simple OO mistakes that you see new programmers, or procedural programmers who've never done OO before, make: deriving everything from everything else, using inheritance for everything. Habits that are hard to break because nobody realizes they're bad habits.

    Do a bit of research though, and you can find ways to be very productive using OO. If the designs are good, it does actually get easier. Buy "Design Patterns" and poke around in the various OO language netnews archives for little nuggets of wisdom and you will find your object designs improving immensely. Read about STL and use well designed toolkits like gtkmm and you'll start to realize benefits from using Object Oriented programming.

  • by cje ( 33931 ) on Tuesday January 09, 2001 @09:00AM (#520538) Homepage
    I think you need to look at the definition of an "Object Oriented" language. You'll see that to be OO a language must support certain things like inheritance, polymorphism, run time binding (dynamic binding) and many other fun things that you can't do in procedural languages. C doesn't support any of those.

    Of course it "supports" them. It's just not as easy as it is in other languages. As has been pointed out before, creative use of function pointers in C can be used to implement polymorphism and "dynamic binding."

    Think about it a minute. What does a C++ compiler do? It translates the (high-level) C++ code to (low-level) assembly code. Are you somehow suggesting that there is no way that the generated assembly code can implement inheritance and polymorphism because no assembly language "supports any of those?" If so, how is it that C++ programs are able to compile, link, and execute? The original C++ compiler, cfront, generated C code as output from the input C++ code. Surely the output C code was no less "object-oriented" than the C++ code it was generated from.

    You can write object-oriented code in nearly any language. The difference is how much language-level support for OO is provided. Just because you can't write "virtual void myFunc()" in C doesn't mean you can't generate the same behavior.
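    To make that concrete, here's a minimal sketch of "polymorphism via function pointers" in plain C (the shape types are made up purely to show the mechanism; the same file also compiles as C++):

        #include <stdio.h>

        /* The "vtable" is just a function pointer stored in the struct. */
        typedef struct Shape Shape;
        struct Shape {
            double (*area)(const Shape *self);  /* the "virtual" method */
            double w, h;
        };

        static double rect_area(const Shape *s) { return s->w * s->h; }
        static double tri_area(const Shape *s)  { return 0.5 * s->w * s->h; }

        int main(void) {
            Shape shapes[2] = { { rect_area, 3.0, 4.0 }, { tri_area, 3.0, 4.0 } };
            int i;
            for (i = 0; i < 2; i++)
                /* "dynamic binding": the call site doesn't know which area() runs */
                printf("%g\n", shapes[i].area(&shapes[i]));
            return 0;
        }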
  • Seriously, did anyone read this before they posted it?

    OOP is communistic?

    No references? (Sorry, we lost them...)

    A giant stinking POOP image header?

    perhaps this should be under the humor category...
  • For a code example, suppose we have an application that calculates taxes for individuals. A common object oriented approach is to create a Person class and a Tax class...

    Nope, I'd just add a calcTax function to the Person class I already had. Maybe this explains why my objects seem to have a lot of functions....
    :-)

    To take the Tax and People example you have given, you have overlooked the fact that an object oriented approach allows inheritance and lots of other stuff. For example, if there was more than one type of tax T on a person P, one could still call a generic function T.calcTax(P) for both InheritanceTax and IncomeTax classes which inherited from the basic Tax class.
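    A minimal C++ sketch of that generic call (Person, Tax and calcTax come from the article's example; the fields and rates here are invented):

        class Person {
        public:
            double income;
            double estateValue;
        };

        class Tax {
        public:
            virtual ~Tax() {}
            virtual double calcTax(const Person &p) const = 0;
        };

        class IncomeTax : public Tax {
        public:
            double calcTax(const Person &p) const { return 0.25 * p.income; }       // invented rate
        };

        class InheritanceTax : public Tax {
        public:
            double calcTax(const Person &p) const { return 0.40 * p.estateValue; }  // invented rate
        };

        // Written once, works for every current and future Tax subclass.
        double owed(const Tax &t, const Person &p) { return t.calcTax(p); }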

    Object Orientation is not an answer to Life, The Universe and Everything. And you often find that one person's idea of an object oriented approach to a problem is totally different from another person's idea. But having said that, I find that it is easier to work out what someone has done if they have used an object oriented approach than if they haven't. For one thing, most people implement all functions of each class in a single file named after the class, something you're never quite sure of with a parameter approach.

    Object orientation simplifies maintainability, by encouraging people to write in a style which may be easily understood, something many other methodologies have singularly failed to address.

  • by piggy ( 5857 ) on Tuesday January 09, 2001 @09:20AM (#520547) Homepage
    OO does not bring benefits for all projects. OO does not always bring short term benefits. Where OO excels is maintainability and understandability, as well as distribution of labor for a medium or large scale project. A 15 hour project is nothing. Worst case, if you need to expand or maintain that project, you waste another 15 hours. A large scale multiple year project, however, truly benefits from an OO approach.

    If implemented with discipline and knowledge, an OO project is better maintained than a procedural one. However, I would take a disciplined procedural project over a messy OO project. The overall guiding point is rigorous software engineering, and OO provides some language level discipline, while procedural programming provides nothing at the language level. If a good process is followed, though, language level support is just convenient fluff.

    Russell
  • The author is correct that OOP won't solve every problem. But he tosses the baby out with the bath-water! The author obviously hasn't read any of these great books/articles:

    What is Object-Oriented Programming? [att.com] (Link to papers, since I can't find the .pdf for this one)

    Multiparadigm Design and Implementation in C++ [computer.org]

    Design and Evolution of C++ [att.com]

    Now I'm not saying C++ is the end-all and be-all, but every language was designed to solve a certain problem. Use the right tool for the right job! If C++ lets you solve your problems quickly and efficiently, then use it. If not, then use what works.
  • You obviously have a situation that calls out for OO design and programming. Now explain how that means that absolutely everything to be programmed has to be programmed in an OO language.

    What I got out of the article (which I thought was poorly done) was that the problem isn't OO itself, but that OO is being overly applied where it isn't needed, and even where it gets in the way.

  • by Lumpish Scholar ( 17107 ) on Tuesday January 09, 2001 @09:45AM (#520555) Homepage Journal
    ... it's been around (and used successfully) for over fifteen years?

    As Jim Coplien has pointed out, OO (objected oriented programming / design / analysis) is older today than "structured" programming / design / analysis was when OO first burst upon the scene. (The structured movement first got some serious press in the mid to late 1960s; the classic book by Dahl, Dijkstra, and Hoare was published in 1972. OO started no later than Simula-67 and Smalltalk-72, and first gathered mainstream attention in the 1980 - 1982 timeframe. The first OOPSLA conference was in 1986.)

    Yes, some snake-oil salesmen overhype OO ... or whatever buzzword they can apply to their product. Surprise.

    No, OO is not a panacea. It's not even always the right tool to apply to a particular design or programming problem. (Coplien's recent book, Multi-Paradigm Design for C++ [fatbrain.com], is a tough but worthwhile read that addresses this issue.)

    You may dislike a particular language that supports OO (Smalltalk, C++, Java, even Perl) but find the paradigm worthwhile in some other language.

    For comparison, compare with this message in Risks Digest [ncl.ac.uk]: "The structured programming revolution is a real bad idea that has been significantly holding back progress for years.... Have there been any double blind studies which unambiguously show that the kind of programs that structured programming partisans enjoy are really more maintainable than some other kind of program? I've heard lots of testimonials, but no real evidence." Sound kind of familiar? (Heh.) --PSRC
  • The key is as you say: properly implemented. Too much OOP has been implemented by the sort of folks who will term a database "almost relational". It takes intelligence and rigor to make sure you don't cut the corners that will reduce your small amount of oop to a very large amount of smoking oops.
  • by rkent ( 73434 ) <rkent@post.ha r v a r d . edu> on Tuesday January 09, 2001 @09:02AM (#520558)
    Ha! Man, he's probably trolling around here, too. I wonder what his slashdot ID is? Probably the same one who submitted the story ;)

    Wow. I think /. has been seriously trolled this time.

  • I agree that for some applications, making an object->relational mapping is just a giant pain in the flipper. For the kind of timelines project managers impose on web projects, and the terrible ever-changing specifications given, it's usually easier to fill up an Oracle table and worry about elegance later. This guy is probably recovering from wounds from some sort of overanalyzing OO web consultancy, having been forced to wear UML diagrams as underwear.

    Technology is rarely the problem -- the usual problem with projects and OO is that the zealots win, and centrists who want to use the most-suited technology for the specific application get ignored or called "hacks". The over-ambitious 3-tier agent-based project gets canned, and the "hacks" get called in to fix it using a less long-bearded method.
  • I don't think its a fad. Structured programming is still the model we use, even when using it within object-oriented programming.

    The guy who wrote the article missed one of the most important aspects of OO, and that's _interface_ inheritance. Interface inheritance is _NOT_ subtyping, and is vastly more flexible and usable than subtyping, which seemed to be one of his big gripes. If you want to know more about interface inheritance, look at my page at

    http://members.wri.com/johnnyb/comppapers/factoringinheritance.html

    I called them "feature factors" here.
  • by Gorimek ( 61128 ) on Tuesday January 09, 2001 @09:04AM (#520566) Homepage
    Does OO make assumptions about human nature?

    Actually it does. But it's a correct one.

    One of the basic assumptions is that the human brain is built to think about the world in terms of things that have properties and behaviour. We can think in terms of procedures and execution flow as well, but we're not nearly as good at it.
  • by BinxBolling ( 121740 ) on Tuesday January 09, 2001 @09:21AM (#520567)
    If you add the methods to manipulate the structure (via function pointers if you're using C), then you have an object.

    What do you think the "window_init", "window_draw", and "window_destroy" functions were? Just because these functions weren't formally defined as methods of the window_t object doesn't mean that they aren't methods. In the programmer's mind, they're methods, and that's what matters, in the end.

  • by Samrobb ( 12731 ) on Tuesday January 09, 2001 @09:05AM (#520579) Journal

    The article is topped off with a gif image tagged "OOP Stinks!". That should give you a good insight into the level of discourse that follows.

    Of course, the author absolves himself from all responsibility for having to present anything more than an emotion-filled diatribe by stating early on:

    Disclaimer: not all myths discussed are necessarily shared by all or even most OO proponents. They are based on "common notions" found in the business and programming world as I have observed them. I am making no claims as to the frequency of beliefs in the myths. My frequency of encounter may differ from yours. Discussing myths is not the same as "punching a straw-man", which some OO proponents claim I am doing by mentioning myths.

    So... his article is based on debunking "OOP Myths", which he states are not "necessarily shared by all or even most OO proponents." He repeatedly fails to back up any of his points with citations or references (and at one point, actually states "Sorry, we accidentally lost the references.") Instead, he justifies his arguments by making blanket statements like "Many OO books and even some OO fans suggest that OO only shines under fairly ideal conditions." Which OO books? "Some" OO "fans"? (Remember the disclaimer - not necessarily all or even most OO proponents...)

    Finally, some of his commonly (or not-so-commonly - take a look at that disclaimer, again) believed OOP myths are outrageous to the point of being silly... OOP eliminated the need for case or switch statements? OOP makes programming more visual? Only OOP has automatic garbage collection? Components can only be built with OOP? Only OO databases can store large, multimedia data? Who, exactly, does believe these myths? PHBs? Certainly not anyone with a CS education or a decent amount of programming experience.

    The best thing I can say about this article is that I think the author has a few good points and compelling arguments that are, unfortunately, lost amid the noise and confusion of unsubstantiated claims. If you can read it through and keep from grimacing in pain as OOP is compared to communism and the lack of research in non-OOP languages is decried, you might be able to find an idea or two that will reward you for the effort.

  • by f5426 ( 144654 ) on Tuesday January 09, 2001 @09:05AM (#520583)
    the quote on top of the object oriented languages chapter was something like:

    "object: to feel distate for something"

    I laughed my ass off (even though I've been doing OO programming since 1991).

    Cheers,

    --fred
  • by WillWare ( 11935 ) on Tuesday January 09, 2001 @09:27AM (#520590) Homepage Journal
    I work in C and Python. In C, I occasionally do a tiny amount of C polymorphism using function pointers, but it's infrequent. Python's OO model is very easy to deal with, but I find that prematurely OO-ing my code is as bad as premature optimization. If I'm really lucky, my objects will be useful for some later project. This is by no means guaranteed.

    Objects are great where they work, and where you have the time and experience to tune them to perfection. The Python libraries are full of beautifully crafted, wonderfully useful object definitions. But that investment is large, and in many cases, doesn't make sense for the purpose at hand. And there are problem domains for which objects simply aren't the natural description.

    The OO people say that the wrong way to reuse code is the cut-paste-tweak method, because then you have two diverging copies floating around. In a perfect world everything might be in a source code repository and I could submit a change rather than spawn a private tweak. But change submissions mean bureaucracy, if I'm working with other people. If my tweak will never see public use, the overhead is an unnecessary diversion.

    The cited geocities page makes noise about table-oriented programming. I remember hearing similar things in the past, stuff like "Put the intelligence in your data and keep your code simple". I would have liked to see a better description of TOP, perhaps a few pointers to tutorials. The guy's own descriptions are pretty useless for quickly grokking his point. Maybe he's only preaching to the database crowd, and I'm not supposed to get it.

  • by TDScott ( 260197 ) on Tuesday January 09, 2001 @08:29AM (#520629)

    Disclaimer: not all myths discussed are necessarily shared by all or even most OO proponents. They are based on "common notions" found in the business and programming world as I have observed them. I am making no claims as to the frequency of beliefs in the myths. My frequency of encounter may differ from yours. Discussing myths is not the same as "punching a straw-man", which some OO proponents claim I am doing by mentioning myths.

    So... this is based on his experiences, without research? He has based this piece of writing on merely his viewpoint? Surely, if any technical critic wishes to be taken seriously, he should back his work up with proper figures and research, rather than "myths".

    Communism also looked good in theory, but its base assumptions about human nature were flat wrong!

    Okay... he's comparing OO with Communism? I don't see the connection. Does OO make assumptions about human nature?

    This seems far too much like a rant, backed up with a few web pages... I would not take this seriously.

  • by garoush ( 111257 ) on Tuesday January 09, 2001 @09:10AM (#520630) Homepage
    How far back is "... a while back while working on an unnamed CPU project..."? If you are talking 5+ years, or even 3+ years, then your data is out of date, as today's compilers are much more efficient about optimizing C++ code.

  • by SpinyNorman ( 33776 ) on Tuesday January 09, 2001 @08:33AM (#520665)
    Sure OOP is a design technique not an attribute of a language (although some make it easier than others), but your example is bogus.

    A data structure is a structure, not an object.

    If you add the methods to manipulate the structure (via function pointers if you're using C), then you have an object.
  • by Shoeboy ( 16224 ) on Tuesday January 09, 2001 @08:33AM (#520674) Homepage
    Personally, I've never been a huge fan of OOP, so I tend to agree on a lot of these points.

    Taco,
    Some of us remember what slashcode [slashcode.com] looked like before pudge and friends started cleaning it up.

    Not only are you opposed to OOP, but you don't seem to be terribly wild about structured programming either. Nor do you give readability and maintainability the time of day. Your relationship with elegant code is in the "distant admiration" category and you seem to consider sobriety an impediment to productivity.

    Not that I disagree with you on any of these points, I just wanted to mention that we already know about them.
    --Shoeboy
  • by twitter ( 104583 ) on Tuesday January 09, 2001 @11:50AM (#520695) Homepage Journal
    Actually, you can just look at his pages to find gems like this:

    Actually, most GUI application programmers almost never see the code structure that makes up their screens. They simply click on a screen item, an event selection box comes up (Events like: on_click, on_exit, on_keyboard, etc.), and then a "code snippet" box comes up to edit the event code. Whether that event code is in a method or subroutine does not matter that much to the programmer. If you changed the generated code implementation from OOP to procedural and/or tables, the programmer may never even know the difference.

    This is not a programmer!

  • by Mr Neutron ( 93455 ) on Tuesday January 09, 2001 @08:37AM (#520701)
    The linked article seems to have more to do with the "silver bullet" mentality than OOP specifically. Anybody worth listening to will tell you that it's just as easy to screw up a OOP project as it is a procedural project. Really, has anybody used "it's object-oriented" as a selling point since 1989 or so?

    In general, the criticism contained in the article is poorly founded. The author uses some nice charts, but has no citations for them. For instance:

    The problem is that building generic abstract modules (intended for reuse in later projects) requires roughly three times the effort as a project-dedicated (regular) module. Although no references will be given here, this figure is fairly widely accepted in the industry.
    Accepted by whom? I've never heard that asserted by anyone in my academic or professional careers.

    Some of the things he calls out apply equally to procedural languages, such as:

    When a new language fad replaces OOP, how do you convert legacy Java objects into Zamma-2008 objects?
    When Pascal replaces C, how do I convert my C functions into Pascal functions? Eh?

    He makes some good points about measuring the effects of change (everybody should do that!) but I don't think this really strikes a death blow to OOP.

    Neutron

  • by 0xdeadbeef ( 28836 ) on Tuesday January 09, 2001 @08:39AM (#520735) Homepage Journal
    It's also true that you can do good object oriented design with a Turing Machine, implemented in the Game of Life, composed of a million midgets wearing reversible parkas, which is directed from above by an Elvis impersonator in a hot air balloon shaped like a guitar.

    That isn't to say it's a good idea.
    --
    Bush's assertion: there ought to be limits to freedom
  • by Anonymous Coward on Tuesday January 09, 2001 @08:39AM (#520738)
    I agree.

    OO is just a handy way of structuring large systems for maintainability. It is extremely useful for what it does, but isn't magic.

    People who dismiss OO out of hand are making the same mistake as zealots who insist that it must be used for everything by rejecting a useful tool.

    Structured programming, functional programming, OO etc are all extremely useful given the right problem domain. The skill is being able to look at a problem and pick the correct tool for the job. Rejecting, or choosing, something automatically can be a very good way to shoot yourself in the foot.

  • by mangino ( 1588 ) on Tuesday January 09, 2001 @08:39AM (#520741) Homepage
    It has been a long time since I last posted to Slashdot. I can normally restrain myself, but this is just pure and absolute BS.

    Properly implemented, code re-use can pay off immediately. I have worked in shops where every time we added a client, we needed a new copy of the code. Even though most of the processing was the same for the new client, we had to start out with a copy of the code. Code re-use would have saved us hundreds of thousands of dollars very quickly. (This did not occur at my current employer.)

    Properly implemented abstraction and OO along with iterative design can save a large amount of money very quickly. The key is to prototype your interfaces for the application you have in mind. Once you have done that, think of a completely unrelated use of the interface and test that. If you can handle 2 or 3 different uses, you have a good interface to start with. Rinse and repeat for the rest of your system.

    People may question you immediately, however the minute somebody decides to change the system message transport from HTTP to JMS, you should be able to convince them of the value of proper abstraction and code reuse: just change the transport class and you are done. I did this in a system where we did all of the work necessary to change the transport in less than 30 minutes. The consultant who had been working on the same problem for 3 months was absolutely amazed at how quickly I made the change.

    OOP is not a cure all, however its use along with proper abstraction can lead to large savings from code-reuse in a short time.

    Mike
    --
    Mike Mangino
    Sr. Software Engineer, SubmitOrder.com
  • by taniwha ( 70410 ) on Tuesday January 09, 2001 @08:40AM (#520745) Homepage Journal
    (for the record I first wrote smalltalk code in the 70's, I regularly code in C++ ...)

    I'm a sometimes chip designer, sometimes programmer... a while back, while working on an unnamed CPU project, I did some low level performance analysis on a number of well known programs (maybe even the browser you're using now). Basically, we were taking very long instruction/data traces and then modelling them against various potential CPU pipeline/TLB/cache architectures - we were looking for things that would help guide us to a better architecture for our CPU.

    I found that quite early on I could figure out which language something was coded in from the cooked numbers pretty easily - OO (i.e. C++) coded stuff always had a really sucky CPI (clocks per instruction - a measure of architectural efficiency that includes pipe breaks, stalls and cache misses). I spent some time looking at this, since it seemed that C++ code would probably become more common in our CPU's lifetime. Basically, C++ code suffered because it took more icache misses (the coding style encourages lots of subroutine calls, which tend to spread over the cache, filling the I$ quickly), and it took more pipe breaks (also due to the subroutine calls and returns - it turned out that some code generators did stuff that broke CPUs' return stack caches, causing many more mispredicts). Finally, virtual method dispatches (basically: load a pointer, save the PC on the stack, and jump through the new pointer) tended to cause double pipe stalls that couldn't be predicted well at all; even though these weren't done much, they were a real killer. (If you've done a bit of modern CPU architecture work, you learn that with long pipes you live or die on your branch predictor's hit rate - these were very bad news.)

    In short, C++ and, more generally, OO encourage coding styles that tend to produce code on which modern CPUs run less efficiently.

    Anyway - you often hear about 'efficiency of programmers' etc etc for OO - I thought I'd add a data point from the other end of the spectrum.

  • by maddboyy ( 32850 ) on Tuesday January 09, 2001 @08:42AM (#520766) Homepage
    This article can be applied to any kind of programming paradigm. Basically, the author concludes that OOP isn't any good because some developers and managers aren't applying it correctly. Well, that's the case for procedural programming, declarative, functional, etc. Yes, OOP will not solve the problem of rushed projects, poor management, or stupid programmers; neither will any other programming style, though.

    Programmers just need to be familiar with multiple programming practices and languages. Programmers need to know when just hammering out some _properly_ planned procedural code will fit the case better than some _properly_ planned OO code. There is no magic bullet and because of this, I think it's a bit pointless to say that one programming style is leaps and bounds better or worse than another.

    I really wish the author had enough confidence in his claims to actually cite some hard facts rather than making them up. Most of the article just seems like old rehashed FUD from the dawn of the OOP movement. The author mentions all of these failed business apps and blames OOP for their problems. I guess IBM, Oracle, NASA, and some of the other big software shops are a bunch of idiots for doing any OOP. But of course this guy must be an expert on software design practices, and that's why he has a Bell Labs URL.
  • by aardvarkjoe ( 156801 ) on Tuesday January 09, 2001 @11:22AM (#520792)
    (Go figure; that post just gave me my bonus back. Time to lose it again :)

    Well, OOP should be used for things where the data structures resemble (in some way) real-world objects. For instance, games and other things that simulate physics can benefit from object-oriented design. Databases with dissimilar chunks of data might be easier to deal with using OOP. It may be helpful to represent other computers or programs as objects, in applications that require large amounts of intercommunication and response.

    OOP essentially downplays procedural programming, but I (at least) tend to find many things procedural, which would have to be twisted up to fit into the OO paradigm. Anything that does an easily-definable process, such as most system utilities and so forth, is probably not suited for OOP. Back to games, I recently wrote a 'roguelike' game: although the code dealing with monsters and items was somewhat object-oriented, the game logic itself was procedural. Programs that primarily deal with large amounts of raw data should probably deal with it in a procedural rather than OO manner.

  • by h_jurvanen ( 161929 ) on Tuesday January 09, 2001 @08:45AM (#520839)
    For a long time now USENET groups like comp.object have been tormented by the author of that article with his constant barrage of FUD and inability to construct meaningful arguments. For an idea of what I'm talking about, check out his posting history [deja.com].

    Herbie J.

  • by slamb ( 119285 ) on Tuesday January 09, 2001 @08:47AM (#520849) Homepage

    The author doesn't describe in depth what I consider one of the greatest advantages of object oriented programming -- polymorphism. Polymorphism is great because it allows you to invoke different methods for the same operation based on the derived class.

    That makes more sense with a real example. I'm working on a set of classes that has a Stream abstraction. Derived classes must provide ways to do raw reads and writes, close the stream, and a few other things. (Preferably, they also provide some way to open the stream -- but that's not a virtual function.) The stream class itself provides buffered IO and some convenience functions for writing stuff in network order, a nice gets function, etc.

    That allows me to have the same easy IO operations on top of many different kinds of streams:

    • OSStream - what the operating system considers a stream. Can attach to a given fd, etc.
    • FileStream - derived from OSStream, adds the ability to open a named object. (kind of a misnomer, could be a FIFO or whatever.)
    • StreamSocket - also derived from OSStream, blends with a lot of socket functionality.
    • PGLargeObjectStream - a PostgreSQL BLOB (binary large object). Basically, like a file but stored in a PostgreSQL database instead of using an inode. Handy because filesystems have a limited number of inodes, which is not good for lots and lots of small files.
    • SSLStream - a SSL connection, requires another open Stream (probably a StreamSocket) to connect.

    Each one of these provides the same basic abilities -- reading/writing, seeking/telling (where applicable), closing, etc...but they do it in different ways. I need abstract read/write code so I can put shared higher-level code on top of it. Otherwise, I'd have to reimplement the higher-level code for each one. That would suck.

    This doesn't even necessarily need an object-oriented language, just an OO concept. OpenSSL, for example, has a set of C functions that do a lot of the same things I'm talking about. It does it by providing a struct with function pointers...basically, a virtual function table. It's definitely not as pretty (I wish they would have just done it right and used C++) but it does work.
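    Roughly, the shape of that design in C++ (a sketch only; the names loosely follow the classes above, not the actual code):

        #include <cstddef>      // size_t
        #include <unistd.h>     // read(), ssize_t

        class Stream {
        public:
            virtual ~Stream() {}
            // Derived classes supply only the raw primitive...
            virtual ssize_t rawRead(char *buf, size_t len) = 0;

            // ...and the shared higher-level helpers live in the base class once.
            // (A real version would buffer; this reads a byte at a time for brevity.)
            bool gets(char *line, size_t max) {
                size_t n = 0;
                char c;
                while (n + 1 < max && rawRead(&c, 1) == 1) {
                    line[n++] = c;
                    if (c == '\n') break;
                }
                line[n] = '\0';
                return n > 0;
            }
        };

        class OSStream : public Stream {
        public:
            explicit OSStream(int fd) : fd_(fd) {}
            ssize_t rawRead(char *buf, size_t len) { return read(fd_, buf, len); }
        private:
            int fd_;
        };

        // FileStream, StreamSocket, SSLStream, etc. would each override rawRead()
        // the same way, and gets() works on all of them unchanged.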

    This is just one advantage of object-oriented programming, but I think it's a very significant one. Worthwhile by itself.

  • by BeanThere ( 28381 ) on Tuesday January 09, 2001 @11:30AM (#520858)

    "If an object is data with functions, then I've been doing object-oriented programming all my life"

    Generally when people talk about "OOP" they're referring to three general things you'll find in an OOP language:

    1. Encapsulation: This is what you've been doing, by the sounds of it. Data, along with its associated functions, is grouped into an "object" (called a "class" in C++; I think it's called an "object" in Pascal, IIRC). This is called "encapsulation", since both the data and functions are organized into a single entity. This grouping is very easy to do with any imperative language (as well as assembly) and follows the traditional separation of programs into "code" and "data" sections. OOP languages just bring the concept closer to the language by providing constructs like C++'s "class". Something else the languages do is facilitate "data-hiding", by allowing you to specify with language constructs (C++'s "public", "private" and "protected") which members are for general users of the class, and which members are for the class's internal use only. Nothing special here - you can do this in C also. C++ just gives you some language constructs to enforce it at compile-time, and perhaps make things a little more organized and readable.
    2. Inheritance. OO languages typically allow these encapsulated objects to 'inherit' behaviour from other objects. This is quite powerful; with a single line of code specifying what to inherit from, your object gains a whole lot of new functionality "for free". Once again, though, there is very little special here; it can all still be done with imperative languages. OO languages just, once again, make this concept more a part of the syntax and constructs of the language. Inheritance also allows you to override methods: if the object you inherit from has a function, you can redefine that function to behave differently.
    3. Polymorphism. Also called late binding. Probably the most powerful aspect of OOP (although, yes, once again, you can hack something in a language like C that will mimic the same behaviour). Polymorphism is like inheritance, except the version of an overridden function that gets called is determined at runtime rather than compile time, using a table of pointers to methods called a "virtual method table". A little difficult to explain... if I have a generic "widget" class with a function called "Draw", then specific types of widgets (e.g. scroll bars, buttons etc.) would firstly inherit from the general widget, and then override the virtual method "Draw". Now a program that handles the user interface can just have a list of pointers to generic widgets (which will in reality be scroll bars, buttons etc.), and it does not have to worry about what sort of widget they are; it just has to call the "Draw" function on each of them when it needs them drawn to the screen, and using the VMT, the correct "Draw" will be called depending on what type of widget it is (see the sketch below).
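    A minimal C++ sketch of that widget example (the class and method names are just the ones used above; the drawing bodies are elided):

        #include <cstddef>
        #include <vector>

        class Widget {
        public:
            virtual ~Widget() {}
            virtual void Draw() = 0;  // each widget type overrides this
        };

        class ScrollBar : public Widget {
        public:
            void Draw() { /* draw a scroll bar */ }
        };

        class Button : public Widget {
        public:
            void Draw() { /* draw a button */ }
        };

        // The UI code holds generic Widget pointers; the virtual method table
        // picks the right Draw() for each element at runtime.
        void drawAll(const std::vector<Widget*> &widgets) {
            for (size_t i = 0; i < widgets.size(); i++)
                widgets[i]->Draw();
        }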

      Sounds like you've done mostly encapsulation, maybe a bit of inheritance is natural too. Probably not polymorphism though.

      The thing about OOP is, although it supports these concepts in the language, it doesn't automatically lead to good design, and that is the point that the author of the article this whole thread is about seems to have missed. You still have to think about your design very carefully to build good reusable components. In general it does make you think a little bit more carefully before you just start coding away, which isn't a bad thing. There is nothing you can do in C++ though that you can't do in C. C++ just lends itself more readily to expressing certain types of ideas within the language's native syntax, e.g. picture the "gtk+" toolkit without all the ugly hacky typecast macros and the hacky inheritance structure.

      Presumably there are some OO zealots out there (I personally don't know any, but the article author seems to think almost every C++ programmer is an OOP zealot), but in general I think most OOP programmers are more focused on solving real-world problems than on spreading hype. For every rabid OOP zealot, there's a rabid anti-OOP zealot. This guy's just one of the rabid anti-OOP zealots. When you really do speak to most people in the business, they *don't* have big issues about programming paradigms. They just use the best tool for the job and are actually mature about it.


  • by Adam Wiggins ( 349 ) on Tuesday January 09, 2001 @11:39AM (#520896) Homepage
    I fought it for many years, and finally gave in. Now I wonder how I ever got by without it.

    There are applications - small ones - for which OOP is not appropriate. Code which generates web pages, for example, is generally best written in a procedural language. The vast majority of applications, however, are easier to extend and maintain (although not necessarily write, at least until you've been doing it for quite a while) if written with OO.

    And of course, you don't need an OO language to use OO. The Linux kernel is very OO, and it's written in C.

    FWIW, the examples given in this article are terrible. Not only are they not very relevant, but they are badly written. If someone I was working with wrote OO code like that, I'd seriously question their ability as a programmer.
  • by BlackStar ( 106064 ) on Tuesday January 09, 2001 @02:43PM (#520905) Homepage
    I roundly disagree with the arguments presented against OOP, on the basis that the author's own references cite the advantages of it, as well as the disadvantages. The consensus on the reference *academic paper* is that it had advantages, but is not necessarily "revolutionary". Indeed, OOP codifies much of what *can* be done with good programming practice in procedural or even functional programming.

    My experience in systems design is that OOP shows benefit when you know it, and with larger *REAL* systems. Networked apps, distributed apps, and yes, even large business apps. I've been involved in an IS project written completely in C, using good, structured programming techniques. At a different company, we were building a similar application, using C but applying a more data-centric approach, close to OOP but not there yet. It was much quicker to build, and easier to enhance. If I took a crack at the domain yet again with OOP and C++/Java, I know it could be improved yet again.

    The issue though, is that if it's one guy, building a one-shot fire and forget app, don't bother. Code it however you want. If you have a significant project with multiple team members, OOP acts as a common framework and a basis for communication. It draws some borders and sets some guidelines and methods to give a common frame of reference. It's possible, but more difficult to do that with procedural code in *most* cases.

    Obviously though, the proof, as the author says, is in the pudding. Millions upon millions of development dollars are OBVIOUSLY being wasted by the industry as it moves to use OOP methodologies and tools. Not just as dictated by PHBs, but also as dictated in start-ups by lead programmers.

    Quote from a pundit who knows the industry better than most of us, Robert Cringely, from a slashdot interview no less:

    Something else that has changed a lot is how software is written. OOP has paid off more than we even know, so there are a lot of chances to make businesses out of selling cogs that fit into other people's machines. Your driver question, for example, wouldn't have even made sense a decade ago.

    Hmm. More than one company now. There's a benefit. Predefined interfaces and component technologies. Can you do that in procedural approaches, yes. Is it easier and more natural and efficient in OOP, yes.

    And finally, as I don't want to get too long, let's take another /. post, a reply from John Carmack.

    First of all, the fact that none of our deployed games used OOP does not mean that I don't think there are benefits to it. In the early days, there were portability, performance, and bloat problems with it, so we never chose to deploy with it. However, all of the tools for DOOM and Quake were developed in Objective-C on NEXTSTEP. Q3 also came fairly close to having a JVM instead of the QVM interpreter, but it didn't quite fit my needs. I'm still not a huge C++ fan, but anyone that will flat out deny the benefits of OOP for some specific classes of problems like UI programming and some game logic is bordering on being a luddite. I still don't think it is beneficial everywhere, but, if anything, I think we are behind the curve on making the transition.

    Hmm. Use the best fitting tool. But even a die-hard success phenomenon like John Carmack seems to think OOP is good for things like UI AND game logic. Not everywhere. No argument from me.

    Jeez, I think he essentially can be construed as having called the author of the original post a luddite.

    I tend to agree.
