
Computer Immune Systems 92

LL writes "We might soon be seeing commercial delivery of autoimmune security systems. Rather than the surface bit-pattern detection of antivirus checkers, these systems attempt to provoke virii in a secure area (IBM) or match network packets against signature tags (Forrest). The interesting plug is that the author suggests that large programs such as operating systems should be made in such a way that no two copies are exactly alike. Now guess what favourite beast has this trait?"
  • by Anonymous Coward
    Every operating system could use this technology to _automatically_ detect and analyze the security holes that all systems have.

    Security holes are found in Linux distributions all the time.

    This would allow early detection of problems like wu-ftpd allowing root access. Early detection plus automatic downloading of the patches would let system administrators fix security holes before they became problems.

  • by Anonymous Coward
    The problem with this is that, at their basest, microbes and viruses are flexible biological molecules with no symbolic meanings.
    Computer code (computer viruses included) consists of binary characters and commands with specific and important symbolic meanings. To deal with sexual reproduction, one would have to deal with recombination of reproductive data - splicing, crossing over, etc. The problem is that code is too fragile! If you have a protein/genetic mutation, chances are it won't really adversely affect an organism (hey, most of our DNA is garbage introns that can afford to be corrupted); get one character or bit wrong in a virus's code and BAM! a non-working virus. IMHO, at this point in research computer code is too fragile/symbolically dependent to be treated like chemical molecules.

    I forget which book it was mentioned in (either Richard Dawkins or Stuart Kaufmann), but they mentioned this criticality at which point systems can no longer withstand point changes without catastrophic failure.

    Respectfully,
    Kevin Christie
    kwchri@wm.edu
  • by Anonymous Coward
    Except it wouldn't. What's a good survival strategy for a virus? Not to be detected, of course. What's a good way not to be detected? Don't do any noticeable harm.

    Er, no. The survival strategy for all living things is to reproduce. (Survival of the genes [code] is what governs an evolutionary process, not survival of the individual.) The way that biological viruses spread is by taking over cells and telling them "stop what you're doing and make me!" When your cells become too busy making a virus to do those nice cell things like respiring and keeping you alive, you die.

    Computer viruses, if exposed to evolutionary pressures rather than being designed, would likely do the same - reproduce. The code would insert itself into a process and say "stop what you're doing and send out lots of copies of me to any other process you can!" When too many processes are busy making virus code instead of doing their job, you get a sick computer, even if the virus is "harmless" in terms of not being intended to make your computer crash.

    Interestingly, automated immune systems for computers might have the same effect as our immune system does - making us feel sick. Most of our feeling of illness when we have low grade viruses like colds is not due to anything the virus is doing to us yet but the loss of energy to fighting it and side effects of our immune response, such as fevers. So with one of these systems, you would get a slow down of sorts from the extra processing of the anti-virus software even though the virus might not (at that level) be causing any problems.

    Just some thoughts from someone who knows biology.

    -Kahuna Burger (can't remember my password at work.)

  • Oh hush, AC.
    I think virii has been pretty much accepted as a word, and as Mark Twain said "I have no respect for a man who can only spell a word one way."

    ~Chris Carlin
    Not at all; I remember reading about virii years before I first heard the word hax0r and all. "Virii" is a well established word arising from a mispluralization of "virus", but hax0r is a purposeful perversion of a word. They're two completely different things.


    Chris Carlin

    There is a problem with the notion that you can develop an all-encompassing defence. It's called evolution. The virii that are no longer effective will be selected out, leaving those which are impervious to this kind of defence.

    As the viable attacks will be the ones which survive, those will be the ones distributed, copied and reused. Within a given timeframe, by creating a "super-defence", you -ALSO- create "super-virii".

    The problem with any evolving system is that it will remain, over a long enough time-frame, roughly in balance. Nothing can become super-strong without in turn strengthening its opponents, by natural selection.

    Only a "truly perfect" defence will work, but no such defence exists, or even theoretically could exist. This leaves you with the "best practical" approach, which is to make things as protected as reasonably practical, and no more.

    This kind of approach has the advantage that you don't accelerate (too much) the development of super-bugs (as medical practices have an unfortunate tendency to do - idiots!) whilst offering a sensible level of protection against more common attackers.

    Ideally, though, defences should do more than just defend. The more time you spend defending, the less time you have to do anything else. This, in itself, is a form of DoS attack on your system, via wetware rather than software, making the admins install so much protection that the system becomes unstable and/or unusable, under typical loads.

    What you want is a form of defence which actually contributes to the rest of the system in other ways. That way, you are gaining overall by expending the resources, and don't run into the DoS trap.

  • Besides being flamebait and a troll, you're also wrong. :) A word is a purposely organised assembly of characters, which includes anything written down and deliberately spelled a particular way.

    (This includes coined words, jargon, local dialects & local terms, regional spelling, national spelling, etc, ad nauseam.)

    On top of that, I believe "virus" has a Latin root, which makes the plural "virii". This is distinct from a word such as "data", which is a plural whose singular is datum.

    Oh, and "rap" ain't music. It's noise with speech trying to drown it out in the forlorn hope nobody'll notice how cruddy it is.

  • by axolotl ( 1659 ) on Friday January 07, 2000 @04:39AM (#1395281) Homepage
    If you're talking about Linux, why should it have this trait? Most people still use a stock kernel from their distribution. That's a lot of people using the Red Hat binary kernel, say, which will be identical for every person using it.

    Sure, you can play with the config or use patches or whatever, but a lot of the code will come out the same. It's not like the compiler puts some kind of unique fingerprint on the kernel you build.
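    The monoculture point is easy to demonstrate: byte-identical binaries hash identically, so any signature or exploit offset that works against one copy works against all of them. A minimal sketch (the "kernel" bytes here are made-up stand-ins, not real ELF data):

```python
import hashlib

# Two machines that installed the same stock distribution kernel hold
# byte-identical binaries -- illustrative stand-in bytes:
kernel_on_box_a = b"\x7fELF...stock vmlinuz bits..."
kernel_on_box_b = b"\x7fELF...stock vmlinuz bits..."

fp_a = hashlib.sha256(kernel_on_box_a).hexdigest()
fp_b = hashlib.sha256(kernel_on_box_b).hexdigest()

# Identical input, identical fingerprint: one signature matches every
# copy -- the software "monoculture" the article warns about.
print(fp_a == fp_b)  # True
```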

    axolotl
  • "Please fasten your seatbelts, as we are presently experiencing turbulence as the result of excessive metaphor shear."

    As much as I would absolutely love to fully envision the Net as a living, breathing organism...it isn't. There are aspects of biology that are appropriate, but I think it's fair to say that these researchers are presuming excessive organic/technical equivalence:

    Technology is externally changed, quickly, and often within the same generation of machinery. Organics internally evolve, extremely slowly, and even then almost wholly reserve their changes for the next generation.

    The fact that technology is externally changed means that there's no evolved internal consistency--the immune system must be explicitly modified to support the new transplant. As biology and technology have shown us, spooging the new into the old is difficult work. The speed of modifications, too, is frightening--while it's obvious that the host systems change much faster in a technological environment, I'd be interested in knowing the genetic variation of attacking bacteria and virii vs. the command variation of attacking trojans and computer viruses.

    The generational woes are the killer--it is impossible to establish the biological concept of a "homeostatic self" onto systems that never stay either frozen in the present or predictable in their growth towards any degree of future.

    Now, granted: There are assuredly "all quiet" states on the average network, and recognizing such states is a common tactic of network monitoring systems. (Indeed, there's a free app out there that will generate a firewall config that will pass any traffic it noted on your network during a "trusted state" period, then block anything else.) But that's a rather blunt methodology, and it denies the inevitable existence of new services. The big problem is: How does one respond to a deviation? The curse of unpredictability is the inability to automate appropriate responses. The curse of being forced to constantly formulate appropriate responses is that it's burdensome and prone to false positives. The curse of not formulating appropriate responses is that you end up not responding at all ;-) All in all, a nasty situation.

    I should be fair--I like what I'm hearing from these guys. I've been arguing for quite a while for systems that prevent the results of an instability from being necessarily exploitable (essentially, randomizing and shuffling systems so that there is no predictable "skeleton key" to the system that works every time). Their talk about monocultures is perfectly appropriate here. IBM's work with victim labs is beautiful, if more than a bit macabre when back-ported to human biology. Even the packet signaturing is interesting. But we should be aware of the limitations of this technology, and I'm interested in just how aware these researchers are of the differences between the evolved and the created.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
    You'd think after the first 17 postings about that not being a word, people would catch on. Guess not.

    Gotta love the English language. Unlike, say, Spanish or French, there is no central committee which decides which words are valid and which ones aren't. While dictionaries and Trusted Newspapers take some of the responsibility, the general rule is rather democratic: If enough individuals use a given word to represent a consistent concept, and if that word is not a homonym of a word with a slightly different (and more standardized) spelling (their/thier/there), that word is considered coined and valid.

    Remember, it is not the purpose of a dictionary to create the language, only to reflect it.

    Altavista shows 8,496 usages of the unique word "virii". At bare minimum, "virii" qualifies as an alternative, non-misspelled variant of the word "viruses".

    Don't play semantic games with me, AC ;-)

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • Vir[ii|uses] are a problem in the Windows world due to a lack of system security, plain and simple. While it is theoretically possible to write a Unix/Linux virus (and has been done), how will it *spread*?

    For a classic virus to work, it must attach itself to an executable and spread when that executable is run (modern email "virus" programs are often technically worms, not vir[ii|uses]). In Windows, this is easy, because the system directories (c:\Windows) are writable by regular users.

    In Unix/Linux, the system directories where most binaries are (/usr/bin, /usr/lib, etc) are not writable by non-root users. If you don't run as root, a virus can't infect the binaries, because it can't write to them. Period.

    If one were to write a Unix/Linux virus, the obvious target program would be /bin/sh (or /bin/bash, etc). Infect this, and you can easily infect everything which is executed by /bin/sh, which is most programs. But how can an ordinary user attach the virus to /bin/sh? On the various Linux and commercial Unix boxen here at work, it is always owned by root/root or bin/bin, and mode 755 or 555 - unwritable by ordinary users.

    At best, a virus could affect user-owned binaries, say in ~/bin. But except for convenience scripts, who uses that? Anything widely used and standard goes into a directory protected from accidental or deliberate damage. That's just good practice.

    If all operating systems followed Unix's wise example, vir[ii|uses] would be merely an interesting theoretical exercise, rather than a serious hazard.
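    The spread argument above reduces to a permission-bit check. A minimal sketch of that logic (uids, gids, and modes are hypothetical examples, mirroring the root/root 0755 case described):

```python
import stat

def user_can_write(mode: int, file_uid: int, file_gid: int,
                   uid: int, gid: int) -> bool:
    """Could this user modify (and therefore infect) the file?"""
    if uid == 0:                        # root writes anywhere
        return True
    if uid == file_uid:                 # owner: check the owner write bit
        return bool(mode & stat.S_IWUSR)
    if gid == file_gid:                 # group member: group write bit
        return bool(mode & stat.S_IWGRP)
    return bool(mode & stat.S_IWOTH)    # everyone else: other write bit

# /bin/sh as root/root, mode 0755: an ordinary user (uid 1000) can't touch it
print(user_can_write(0o755, 0, 0, 1000, 1000))        # False
# a script in ~/bin owned by that same user is fair game
print(user_can_write(0o755, 1000, 1000, 1000, 1000))  # True
```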

    ---
    120 chars is barely sufficient
  • I have to disagree. Look around you. Life doesn't reproduce at the expense of all else; it goes until a balance is obtained with other living things. However, sometimes things get out of hand and you get exhaustion of natural resources and the species dies back some or entirely.

    I suspect that once computer viruses start exhibiting evolutionary-like behavior they will behave just like their biological cousins; sometimes reproducing at a frenetic pace and crippling and destroying everything in their wake, and lots of dormant viruses stuck on the wrong sort of OS, or small viruses that reproduce at low rates and aren't malicious.

    Just like life; variety. :)
    I expect any time now to hear that someone has introduced a virus that evolves in the sense that a genetic algorithm evolves a solution to a problem. The internet is large enough an e-ecosystem to support millions of copies of a virus, so even if the survival rate of the variants produced by breeding and mutation was very low, there might be enough survivors each generation to evolve into a truly dangerous virus.


    Except it wouldn't. What's a good survival strategy for a virus? Not to be detected, of course. What's a good way not to be detected? Don't do any noticable harm.

    An evolving virus (if it survived at all; real-world systems are rather brittle environments for a-life organisms in the wild) would very quickly become very small, very prolific, and completely harmless.

  • Wrong!

    alt.comp.virus FAQ [wisc.edu]

    There's another page with some serious analysis of the Latin words

    right here [perl.com]
  • Normally, when anti-virus software installed on a personal computer (PC) detects a suspected but unknown virus that it cannot handle, it sounds an alarm and waits for human operators to fix the problem
    Umm, current anti-virus software does not detect suspected but unknown viruses (virii?). It's either known, named, blocked and dealt with, or it's normal user code and ignored. You can get false positives and you can definitely get false negatives, but there's no grey area. How can a computer possibly tell the difference between something a valid program has asked it to do and a virus, without simply pattern matching?

    The human immune system knows what it should find and anything else is an invader. Computers aren't like this, they change all the time - installing programs, writing files. You can't just, I don't know, look for a different electron on the hard disk?

    The only thing I can come up with is that the anti-virus package CRCs every non-data/document file as it hits the hard drive; then, if the file is modified, I guess it might have a virus on its hands (or it could just be a valid patch). But in that instance, it would be better for all the base systems in a network to be identical, rather than each one being slightly different - that way you could recognise a difference in one as a potential virus...
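    That scheme is essentially what file-integrity checkers do: record a checksum baseline, then flag drift. A minimal sketch (filenames and contents are made up for illustration):

```python
import zlib

def checksum(data: bytes) -> int:
    return zlib.crc32(data)

# Record a baseline CRC for each executable as it lands on disk...
baseline = {"wordproc.exe": checksum(b"original executable bytes")}

def possibly_infected(name: str, data: bytes) -> bool:
    """Flag any executable whose CRC has drifted from the baseline."""
    return checksum(data) != baseline[name]

# ...then later re-check: unchanged file is clean, modified file is flagged
print(possibly_infected("wordproc.exe", b"original executable bytes"))
# False
print(possibly_infected("wordproc.exe",
                        b"original executable bytes" + b"\x90payload"))
# True
```

    As the comment notes, the check detects change, not intent: a mismatch could just as easily be a legitimate patch as a virus.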

    These things always remind me of an unusual consequence of the Gödel Incompleteness theorem (which is a proof that you can't prove everything).

    The consequence is essentially that any sufficiently powerful computer system cannot be made virus/cracker-proof. No matter how good your AV software and how tight your security procedures, unless you limit the power of your machine (not how fast it runs, but what sorts of things it can do) you cannot ensure your security.

    I've decided that it's really not worth the bother to run a totally secure system, and I don't even run a virus scanner anymore.

    Before everybody jumps on my back, I'd better clarify "sufficiently powerful". You could say that a machine that is stored in a locked room w/out any connection to any external network, and that requires a swipecard and a 128-byte password to access, has perfect security. But such a machine is not "sufficiently powerful" to be crackable. It is less powerful than my pokey ol' 486, because my 486 can connect to the internet. If I wanted, I could set it up as a web server. But a machine in a locked room can't do this, and therefore is less powerful.

    Even if you have an internet connection, if it refuses all external connections and is behind a good firewall, it may be impossible to break into. But again, it is less powerful than any web server, even one that just displays static pages. It is once you cross a certain threshold of usability that it becomes "sufficiently powerful". If you have an open telnet port and 1 user account, it is probably "sufficiently powerful". That's not a very high threshold.

    The threshold for virii is even lower than for cracking. If I want to run outside software, I have to expose myself to virii. If I have good AV software installed and running, I may be able to detect all virii. But then I can't run any programs that appear sufficiently virus-like, because the AV software will flag them, and Gödel's Incompleteness theorem shows that if my software catches all virii, it MUST catch some non-virii.

    So, security is an impossible goal.

    It's still pretty cool to have AV software that automatically looks for 'new' virii though.
  • I think your observation that these are issues of time scales is very important, and it's at the heart of why I believe that the computer virus/biological virus analogy is flawed.

    The immune system was successful initially because it could very quickly generate new defense mechanisms that pathogens would take some time to adapt to through evolutionary mechanisms.

    Even so, after many millions of years of evolution, there are now numerous pathogens that simply aren't touched by the immune system at all; the only reason why those pathogens haven't wiped us out is because natural pathogens don't have malicious intent, and most of them have co-evolved to co-exist with us.

    When it comes to computer viruses, the insight to be concerned about is the insight of the virus writer. Unlike the biological world, where pathogens need to spend millions of years of evolution to figure out general mechanisms for avoiding the immune system, a virus writer can come up with a general purpose strategy for evading a "computer immune system" within days.

    If you want secure systems, in a world of human adversaries, the only way to build them is so that they are structurally secure or cryptographically secure, and those are engineering problems that are very different from what biological systems have faced until now.

    (As an aside, the next step of evolution of biological pathogens may be interesting. The immune system got us quite far, but it is growing old as a defense mechanism as pathogens have found general purpose ways of evading it. Perhaps its successor is our brain, as we design drugs and treatments rationally. It will be interesting to see how the pathogens will respond.)

  • by jetson123 ( 13128 ) on Friday January 07, 2000 @09:24AM (#1395291)
    There are some broad analogies between biology, ecology, and computer systems. For example, "monocultures" are susceptible to "viruses". And many "viruses" can be detected effectively by tracking the appearance of "fragments" (of code or proteins) and correlating that with computer system damage. But, biology or not, those are ideas that any good engineer should come up with anyway.

    Perhaps the biggest point of departure is that biological systems are evolutionary, while computer systems are designed by humans, with knowledge of the possible countermeasures. That means that many immune system strategies just won't translate.

    But even more important is perhaps the observation that most biological systems (even plants and most animals) don't even have immune systems. They rely on other mechanisms for their defense, mechanisms that many engineers would probably consider "good engineering": make it hard for the viruses to get in, destroy viruses that do get in, minimize the effects of infection if it does occur, stop the spread of infection with various barriers, and have lots of redundancy. The evolutionary pressures for some animals to develop immune systems probably simply don't exist for computer systems.

    So, if you want to push the biology analogy, it may well be better to do without an immune system and to simply design good, strong systems.

  • That's what you're suggesting, right? An anti-virus system which goes after valid code?
    Interesting. So if you have one of these AV systems in place, and apply a binary patch to some code (à la Id DOOM patches), your changes will get clobbered. Makes sense, and I can see why it would - the checksums and size changed, after all. But what you're saying is that this AV system could one day decide (or be prodded into) going after stable, unmodified code - having seen it as infected?

    As for CyberAIDS, I recall something from circa MS-DOS 5.0/6.0. I'd heard of a virus, aptly named CyberAIDS, which would do nothing more than disable your antivirus software. I don't know specifics, but it was interesting to me that it would trash NortonAV, CentralPoint, whatever, leaving you wide open to conventional bugs. I think (IIRC) that it would leave the TSR running, but disabled. Cold.
  • by jabber ( 13196 ) on Friday January 07, 2000 @09:10AM (#1395293) Homepage
    I know it's OT, but I thought it was cool..

    A few years ago, Wired (before they lost their edge) ran a pseudo-retrospective issue from the future, in which they reviewed the turn of the millennium from a few decades ahead. It was a pretty neat diversion. Anyhoo...

    One of the main articles dealt with 'The Plague', a super-flu/AIDS/Ebola mutation that threatened to wipe out humanity. (It's striking how biologically apropos the computer virus analogy is, and how well it tracks with real life problems, solutions and latest computer development) The article was written in retrospect, like the whole issue, and in the form of interview with one of the top researchers involved in stopping the disease.

    The truly neat thing about the story, and what keeps me remembering it, was that the disease was cracked not by traditional medical means but by a mathematician who found a way of attacking the geometric form of the virus. I don't know how unconventional this approach is in virology, but the cross-pollination of medicine and math really struck me.

    I'm a very strong believer in gestalt thinking, and in the fact that laws of nature from one field map remarkably well onto seemingly unrelated fields. Take Newton's Laws of Motion, abstract a bit and apply to sociology. Action-reaction. The Law of Entropy seems to hold true when placed in the context of politics. :) Somehow it all ties in to Asimov's Psychohistory too.

    This is why the article resonated with me, and why the topic of evolving virii triggered me to go OT about memetic cross-breeding.
  • Might it be more likely for a virus to grow if it focused not on making copies within a system, but if it focused on spreading itself?

    Perhaps scan the filesystem for email addresses frequently sent to, and send melissa-style mailings to them? Maybe search for common email programs, and infect them?
    Sort of...
    You're right about people using the same binaries.
    But a kernel will optimise differently if compiled for a 486 vs. a P2... that's not very wide variation, though.
    Also, some people (like myself) recompile to pick what will be drivers, what will be in the kernel, and what won't be supported at all - that is, conformed to needs, preferences and hardware.
    And it doesn't stop with the kernel. Different libraries may be used; even the core library can be compiled differently. The system configuration, etc. - it all changes the defects in the system. A virus might be made to infect Linux, but the defect used by the virus can be swapped out or may never have been installed.
  • While I'm glad to see this "news" hit Slashdot, I have to wonder why it wasn't considered newsworthy back in July. Check out the old news [sciencenews.org] at sciencenews.org.
  • There is a problem ... It's called evolution.

    Yes, but the problem contains within it its own solution. Viruses evolve. So systems must also evolve. There will never be a perfectly secure system... for long. But neither will the most harmful viruses remain viable for long. Tremendous forces (unstoppable forces?) are quickly mobilized against them. The writers of malicious viruses are clever, but I doubt that they're as clever as the combined cleverness of all those who work to stop malicious viruses from doing their damage.

    Only a "truly perfect" defence will work, but no such defence exists, or even theoretically could exist. This leaves you with the "best practical" approach, which is to make things as protected as reasonably practical, and no more.

    Viruses, as they evolve, can be expected to arrive at the "most practical" approach, rather than the most damaging. Over time, this would lead to the evolution of stealthy viruses that do little or no harm to the systems they infect, use minimal resources, and may even offer some benefit (f'rinstance cool graphics, greater efficiency, protection against other viruses). A "most practical" virus-proofing scheme would not waste its time with these benign viruses, which would drive the evolution of ever more benign viruses.
  • by Black Parrot ( 19622 ) on Friday January 07, 2000 @04:45AM (#1395298)
    I expect any time now to hear that someone has introduced a virus that evolves in the sense that a genetic algorithm evolves a solution to a problem. The internet is large enough an e-ecosystem to support millions of copies of a virus, so even if the survival rate of the variants produced by breeding and mutation was very low, there might be enough survivors each generation to evolve into a truly dangerous virus.
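    A hedged sketch of the mechanism being imagined here: repeated mutate-and-select over a population, with a toy fitness test standing in for "does the variant still run and spread?" (which, as the replies note, almost all mutants would fail):

```python
import random

random.seed(0)  # deterministic for illustration

def mutate(code: bytes) -> bytes:
    """Flip one random byte of the 'genome'."""
    i = random.randrange(len(code))
    return code[:i] + bytes([code[i] ^ random.randrange(1, 256)]) + code[i + 1:]

def survives(code: bytes) -> bool:
    # Toy stand-in for "the variant still executes and propagates";
    # most mutants fail, mimicking the fragility of machine code.
    return sum(code) % 5 == 0

population = [bytes(8)] * 50            # 50 identical ancestor "viruses"
for generation in range(20):
    offspring = [mutate(p) for p in population for _ in range(2)]
    survivors = [c for c in offspring if survives(c)]
    population = survivors or population  # a lineage always persists

print(len(population) > 0)  # True
```

    The point of the sketch is the shape of the loop, not the numbers: with millions of hosts instead of 50 byte-strings, even a tiny per-generation survival rate leaves candidates to evolve from.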

    --
    It's October 6th. Where's W2K? Over the horizon again, eh?
    I haven't read the white papers (yet), just looked through the article. What is there seems interesting, but hardly earthshattering. This is basically a straightforward application of genetic algorithms to computer security. Matching the concatenated sender's address, receiver's address, and port is really only useful for smallish, relatively self-contained networks where any non-regular "outside" connection is automatically suspicious. This wouldn't work at all for an e-commerce site, for example.
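    Stripped to its core, the scheme described amounts to learning a "self" set of connection triples during normal operation and flagging everything outside it. A hypothetical sketch (addresses and ports are invented):

```python
# During a trusted training window, record every (src, dst, port) triple
# seen on the network; afterwards, any triple outside "self" is suspect.
self_triples: set[tuple[str, str, int]] = set()

def observe(src: str, dst: str, port: int) -> None:
    self_triples.add((src, dst, port))

def anomalous(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) not in self_triples

# Training traffic on a small internal network:
observe("10.0.0.5", "10.0.0.9", 25)   # routine mail delivery
observe("10.0.0.7", "10.0.0.9", 80)   # routine web hits

print(anomalous("10.0.0.5", "10.0.0.9", 25))   # False: part of "self"
print(anomalous("10.0.0.5", "10.0.0.9", 23))   # True: novel telnet attempt
```

    This also makes the e-commerce objection concrete: on a public site, previously unseen source addresses are the normal case, so nearly all legitimate traffic would look "nonself".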

    The suggestion that no two operating systems be exactly alike is also an interesting one, but hardly practical. First of all, most security holes occur in applications, not operating systems per se. The dangers of monoculture are real, but purposefully avoiding popular software (1) leads to suboptimal solutions to problems (do you want to avoid Apache just because it is the most popular web server?); and (2) strongly smells of security through obscurity. Besides, think of the technical support nightmares: does anybody really want to support hundreds or thousands of "slightly different" operating systems?

    I feel that the biological metaphors are somewhat overblown and could be misleading. On the other hand, journalists like them...

    Kaa
  • Hm, I do think some kind of fingerprint could be created for each compiled kernel

    The question wasn't kernel fingerprinting. Basically, it's the same old argument: if 90% of the world's computers run Windows, then a single flaw in Windows makes 90% of the world's computers vulnerable. As far as I understood, Forrest was arguing for internal differences in operating systems that would confuse a virus or a root kit. Checksums are irrelevant here.

    Kaa
  • As someone much wiser than I once said:

    "Any significant advance in technology is indistinguishable from magic."


    That someone was Arthur C. Clarke, and I believe the correct quote is "Any sufficiently advanced technology is indistinguishable from magic".

    If you put a caveman in front of an iMac, he's going to insist it's a deity

    Until he finds a heavy blunt object.

    Kaa

    but can or will it actually work when put to the test?
  • Perhaps the biggest point of departure is that biological systems are evolutionary, while computer systems are designed by humans, with knowledge of the possible countermeasures. That means that many immune system strategies just won't translate.
    The `humans have forethought' notion is often raised against the idea that living-systems concepts apply to engineered systems, but in fact it's unclear how often, in the end, such foresight has made a difference. The engineers designing the original PC knew about concepts like a protected kernel mode, per-process address spaces, and so forth, but they left them out of the design anyway. In hindsight we see that such decisions virtually created the computer virus deluge that immediately followed.

    Why did they do that? Where was the foresight there? Because of the bottom line: There was no immediate need for such measures, and it would've cost money and resources and time-to-market to put them in.

    Responding quickly and cheaply to only the immediate needs is a hallmark of both evolutionary systems and market-driven systems.

    In the big picture, human foresight is often a good deal less important than we usually think it is. My best working hypothesis is that human ingenuity and intentionality essentially only accelerate what is undeniably an evolutionary system.

  • by dkh2 ( 29130 ) <`moc.hctIstiTyMoDyhW' `ta' `2hkd'> on Friday January 07, 2000 @04:40AM (#1395304) Homepage
    I was going to moderate the first 3 posts down another notch but I actually had a thought on this topic. I guess somebody else will have to wield the moderation on this one.

    So the idea is to increase security in a number of ways including (but not limited to) having each copy of the OS be unique, and having the AV package put the subject in a box and taunt it. (For those of you who haven't seen it, now's a good time to watch that Monty Python "Holy Grail" movie.)

    So how strong are the odds that such methods could inadvertently result in some sort of computer auto-immune disorder? Could our anti-virals manage to interpret the kernel as a virulent entity to be removed? Or, are we all just too smart (or lucky) for that to happen?
    "A little song, a little dance, a little seltzer down your pants."

    Artificial means 'made by human hands'; it is cognate to artifice. It has acquired a negative connotation over the years as artificial flavours and products have been created, but it still retains some of its old splendour.

    You make a good point, though: is an AI an intelligence? If it is, then 'artificial intelligence' is the appropriate term. OTOH, if it is not, if it is merely a program which aids a human (even in the absence of said human), then it is more properly called an 'automated intelligence,' as you point out.

    The one is the strong AI position, the other the weak AI position. Having just spent a semester working on AI, I must say that I consider the strong AI position bollocks, for all sorts of philosophical, mathematical and practical reasons.

    Perhaps I will start calling it 'automated intelligence.'

  • Look at Ebola, Marburg, and related virii. They don't pussyfoot around, or stop breeding when the host becomes unstable. They go full bore as long as they can, as fast as they can. That's why you never hear of a serious outbreak - the infected die faster than the virus can spread. Hardly ruthless, though. Just reproduction at a very basic level... Can you imagine a computer virus with the dormancy and lethality of HIV? That'd be scary.

    itachi
  • by Robert Link ( 42853 ) on Friday January 07, 2000 @06:30AM (#1395307) Homepage
    If the point you are trying to make is that nothing is foolproof, I imagine the antivirus researchers would agree with you. I would be very surprised if they thought they could put together a system that would solve the virus problem once and for all. If they did, they probably wouldn't be using biological immune systems as a prototype, since biological immune systems are far from foolproof. After all, we still get sick from time to time. Sometimes we even die of disease; in fact, without outside intervention in the form of modern medicines and treatments we would die of disease a lot more often than we probably care to think about.


    However, like our bodily immune systems, these systems could serve as a first line of defense. Their advantage lies not so much in that they are universal proof against infection (they aren't), but in that against "routine" infections they shut the virus down before it has the opportunity to do any real damage, far faster than would be possible if human intervention were required. Inevitably, some infections will slip through (just as with biological immune systems), and when that happens you need outside intervention; i.e., the computer equivalent of a trip to the doctor's office.


    -r

  • To be honest, the Unix security model is almost as weak as the Windows security model in this aspect. If I ever write "my" virus, it would either infect user-writable executables or simply ~/.bashrc to make sure it gets started. This is a simple solution to make the virus resident even after a restart. Then of course somehow the virus has to get root access, but that is OS specific, it could either be done like the Internet Worm by manually cracking the passwords (the virus has time) or simply by mail-spreading itself. A "click here to see my new Linux demo (attached)" is still a good way to attract new breeding grounds...
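    The ~/.bashrc trick above suggests an equally simple countermeasure: integrity-check your startup files. A minimal sketch of the idea (the filenames and baseline scheme are illustrative, not any real tool's):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_integrity(files: dict, baseline: dict) -> list:
    """Return names of files whose current hash differs from the baseline.

    `files` maps a filename to its current contents (bytes);
    `baseline` maps a filename to its previously recorded hash.
    """
    changed = []
    for name, contents in files.items():
        if sha256_of(contents) != baseline.get(name):
            changed.append(name)
    return changed

# Record a baseline, then detect a later modification.
original = {".bashrc": b"alias ls='ls --color'\n"}
baseline = {name: sha256_of(data) for name, data in original.items()}

tampered = {".bashrc": b"alias ls='ls --color'\n/tmp/.evil &\n"}
print(check_integrity(original, baseline))   # []
print(check_integrity(tampered, baseline))   # ['.bashrc']
```

    Of course, a baseline stored in a user-writable location can itself be edited by anything running as that user, so real integrity checkers keep their databases offline or read-only.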
  • by Hard_Code ( 49548 ) on Friday January 07, 2000 @06:10AM (#1395309)
    Polymorphic and 'mating' viruses have been around for quite a while. Polymorphic viruses add random code to their source to 'shape-shift' and attempt to avoid signature identification (basically sprinkling noops around at random).

    Some viruses are actually pairs of viruses, which, when they find each other (both infect the same file or piece of memory, etc.), will join and/or manifest some new behavior (start their payload).

    Very interesting stuff actually. It's too bad that malicious virus writers have tainted the whole topic. Self-replicating, autonomous programs are very interesting.
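    To see why noop-sprinkling defeats naive signature scanning, consider a toy scanner that looks for an exact byte sequence (all bytes here are made up for illustration):

```python
SIGNATURE = bytes([0xDE, 0xAD, 0xBE, 0xEF])  # hypothetical virus signature
NOP = 0x90  # x86 no-op byte

def naive_scan(image: bytes) -> bool:
    """Exact-substring signature match, as early AV scanners did."""
    return SIGNATURE in image

def polymorph(image: bytes) -> bytes:
    """Crude polymorphism: insert a NOP between every original byte."""
    out = bytearray()
    for b in image:
        out.append(b)
        out.append(NOP)
    return bytes(out)

infected = b"\x01\x02" + SIGNATURE + b"\x03"
print(naive_scan(infected))              # True: signature found
print(naive_scan(polymorph(infected)))   # False: no contiguous signature left
```

    A real polymorphic engine can't insert junk between arbitrary bytes (that would corrupt instruction operands); it inserts noops only at instruction boundaries or re-encrypts its body with a varying key, but the effect on substring matching is the same.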

    Jazilla.org - the Java Mozilla [sourceforge.net]
  • homonym n : two words are homonyms if they are pronounced or spelled the same way but have different meanings.
    dictionary.com [dictionary.com]


    Now, IANALS (I am not a linguistics scholar), but isn't virus(the computer term) a homonym for virus(the biology term) in the same way that bark(the tree skin) is a homonym for bark(the sound a dog makes)?

    If this is true, Virus(computer) is most likely an English word, and no official linguistic rules have been made for it.

    The beauty of the English language is that we are free to modify it to suit our needs. It's adaptable, and if we feel like spelling the plural of virus, virii, viruses or vira, it should be accepted.

    The way I see it, in biology, it's unlikely to see one viral cell. Virus seems like it would be plural already. I'm probably totally wrong in this paragraph.

    I've read the articles you point to, and understand them. This is definitely not meant as a flame, but aren't there more important things to worry about than how we spell the plural of virus?

  • The suggestion that no two operating systems are to be exactly alike is also an interesting one, but hardly practical. First of all, most security holes occur in applications, not operating systems per se

    *AHEM* Windows? :)

    Maybe not in the *nix (or bsd :) world but look at who is most likely to get a virus/trojan. People on windows (most likely using AOL).
  • Virus is a Latin word. Both plurals are used; viruses is more common, but in scientific circles virii is used. It is one of those things like formulas/formulae. Why does this always come up? And why is it that when it does come up, we're always afflicted with a spate of paradiorthosis? Sigh. The only thing more annoying than a correction is a mistaken one.

    I implore you, Mr Penguin, to read this FMTEYEWTK [perl.com] on the matter. Latin just didn't work the way you claim that it did, and neither does English.

  • secondly, even if it is Latin, "virii" is not a correct Latin pluralization! ("Viri" would be.)
    It's really much more complicated [perl.com] than that. Here's the short version of that long one.

    Not all nouns that ended in -us became -i in the nominative plural. Only second declension masculine nouns did so. There are several (I can think of three) other flavors of -us nouns, none of which follows that rule.

    1. 2nd declension "irregulars", which were either full-time or part-time neuters and often of Greek descent, such as pelagus/*, vulgus/*, and the interesting case (as it were :-) of cêtus/cêtê.
    2. Nouns from the 3rd declension, like corpus/corpora, genus/genera, and tempus/tempora.
    3. Nouns from the 4th declension, like status/statûs, apparatus/apparatûs, and prospectus/prospectûs.

    So virus fails to follow the focus/foci rule for at least three different reasons:

    1. Virus was not masculine, but neuter.
    2. Virus was not a count noun, but a mass noun, like vulgus, which was also (usually) neuter.
    3. Virus probably wasn't even from the 2nd declension, but from the 4th declension.
  • I believe "virus" has a Latin root, which makes the plural "virii". This is distinct from a word such as "data", which is a plural and who's singular is datum.
    Yes, virus was in Latin, whence it derived from the Greek digamma - iota - omicron - sigma (sorry, don't have a Greek font). That's not the issue, however.

    I can see you haven't read the other postings here lately. You see, your simplified view really was not how Latin worked. Here's the short story [slashdot.org] from today, and here's the long one [perl.com] from some time ago. Thank goodness we don't have to remember all those rules in English!

    I find it painfully but amusingly ironic that you should have used who's improperly in the cited passage above. You need the relative pronoun to be in the genitive case--to wit, whose. I believe this falls under the category of throwing stones in glass houses. :-)

  • To be honest, the Unix security model is almost as weak as the Windows security model in this aspect.
    What you've said is largely irrelevant. Here's something I once wrote on this matter. You can change the Perl references to Bash, to accord with your own statement. I wish I'd saved the links to Abigail's virus. Check DejaNews.

    --tom

    _______________________________

    No, it's really far more complex than that.

    You are correct that it is no mean trick to write a program that can damage the system it runs on, largely irrespective of what kind of system we're talking about. And so long as you can hoodwink some unwitting user into executing that program on their system, that program can, of course, cause damages commensurate with the privileges and capabilities of that user.

    What you've failed to consider is how the dramatic cultural differences between Unix and the much-maligned consumerist toys serve to affect the issue to our benefit and their detriment.

    Probably the most important of these cultural differences is that Unix has historically been a source-only world. Programs are distributed in the form of source code, code which shall be configured, built, and ultimately installed on the target machine. Programs solely accessible in machine language form fall immediately under a taint of mistrust.

    Think back to the last time you read a notice from someone you'd never heard of, asking you to go fetch some random binary program from some random place on the net and then to run that program under full sysadmin privileges. I can already see the incredulous Unix sysadmin reading that and bursting out in uncontrollable guffaws. Because the de facto standard for program interchange in Unix is as source code, a Unix programmer will be far less likely to fall for your ploy than would your average Prisoner of Bill, who has been lulled into gullibility by a binary-only culture.

    But for the sake of the argument, let's say that you've found a way to effect this trick. Suppose you're an employee of some reasonably respected company that happens to produce a binary-only distribution of their commercial software, and you decide to sneak something wicked into the binary image. You manage to replace the standard, clean copy on your company's ftp or http server, or even floppies or CDs, with your own naughty version. People are accustomed to downloading from your company, or using your company's floppies, so they do as they've always done, run the installation as the superuser, and you thereby have your way with their system.

    If this scenario were to play out, just how dangerous--how destructive--could it really prove? Whom could you harm, and who would be immune to your ploy? The answer is that you could only hurt those folks running the exact platform for which your binary had been compiled; everybody else is unassailable. By platform, I mean the whole feature vector that includes processor chip (e.g. Sparc vs Intel), operating system (e.g. SGI vs BSD), shared libraries (e.g. libc vs glibc), and site-specific configuration (e.g. shadowed vs non-shadowed password files).

    Let's not get too full of ourselves and pretend that the Unix culture's predilection for source-only program distribution derives only, or even mainly, from altruism. We have no choice in this matter. If you're on Unix and you don't have the source, then you can't run the program on all your diverse systems. And if Unix programmers do not provide source, they cannot hope to have their program as widely used as it would otherwise be.

    Consumer-targeted systems from Microsoft or Apple are two instances of a static monoculture, as vulnerable to mayhem as a field of cloned sweet corn. It only takes one genetically engineered virus to bring down the whole field. Unix is different.

    In his acclaimed essay, In The Beginning, Neal Stephenson writes:

    It is this sort of acculturation that gives Unix hackers their confidence in the system, and the attitude of calm, unshakable, annoying superiority captured in the Dilbert cartoon. Windows 95 and MacOS are products, contrived by engineers in the service of specific companies. Unix, by contrast, is not so much a product as it is a painstakingly compiled oral history of the hacker subculture. It is our Gilgamesh epic.

    What made old epics like Gilgamesh so powerful and so long-lived was that they were living bodies of narrative that many people knew by heart, and told over and over again--making their own personal embellishments whenever it struck their fancy. The bad embellishments were shouted down, the good ones picked up by others, polished, improved, and, over time, incorporated into the story. Likewise, Unix is known, loved, and understood by so many hackers that it can be re-created from scratch whenever someone needs it. This is very difficult to understand for people who are accustomed to thinking of OSes as things that absolutely have to be bought.

    There is no one thing called Unix. Instead, Unix comprises a diverse set of subtly (and often not so subtly) variant platforms. A nefarious binary laced with exquisitely designed evil bullets hidden inside it can hurt only a few of us. When Apple and Microsoft laugh at our diversity, be sure to remind them that it is their lack of the same that contributes to their incredible vulnerability--and to our strength. Hybrid vigor ultimately wins out over a monoculture, for the latter is too in-bred and fragile to prove long viable.

    Let me now return to your particular suggestion, that of a malignant Perl program activated by a Makefile rule at installation time. Because you're talking source code, and because Perl tries rather hard to attain a high level cross-platform intercompatibility, this form of subterfuge would appear exempt from the inherent protections stemming from diversity in variant Unix platforms. So, could your trick be done? How much of a problem could this really be? What might happen?

    The answer is that of course, it could be done. And in point of fact, a demonstration model is already available, courtesy of Abigail. Guess what? There's no reason to run around like a chicken with its head cut off: the sky isn't falling. This sort of approach stands little chance of making a big splash, because you aren't going to insinuate it into a place that can affect a lot of people. Sure, you might catch a few folks, but just how long do you think this kind of thing will go unnoticed? Remember, it's in source code. That means anybody who wonders what happened can just look at it. There's a very low barrier to entry. And even if the naughtiness removes itself from your copy once its dirty deeds are done, that naughtiness is still sitting there in plain view for easy inspection back wherever you got your copy from.

    Is there a way around this? Well, yes, if you're as clever as Ken Thompson. Fortunately, you aren't, and neither are the crackers. If they were, they'd doubtless receive more Turing Awards for their vaunted efforts. :-)

    The only way you're going to get good propagation is if you get your nastiness into a copy that a lot of people will download and install. There's a very fine reason why so many archives contain a checksum of the image. It's to help with this problem. Security of course depends on several matters, including the strength of the algorithm and the integrity of the authenticating agent. But better that than nothing.
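    Verifying the published checksum before installing is exactly this defense. A minimal sketch, assuming the archive publishes a SHA-256 digest alongside the tarball (the digest here is computed for a toy payload, not any real release):

```python
import hashlib

def verify(payload: bytes, published_hexdigest: str) -> bool:
    """Refuse the payload unless its SHA-256 matches the published value."""
    return hashlib.sha256(payload).hexdigest() == published_hexdigest

clean = b"configure && make install scripts ..."
published = hashlib.sha256(clean).hexdigest()  # what the archive would list

tampered = clean + b"\n: evil payload"
print(verify(clean, published))     # True
print(verify(tampered, published))  # False
```

    As noted above, the check is only as strong as the channel the digest arrived over: an attacker who can replace both the tarball and the checksum file gains nothing from this, which is why cryptographic signatures from a trusted key are the stronger answer.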

    Let's talk about propagation some more. I assume that the goal is to have a notable impact, which means you need to spread your bad code as widely as possible. A hacked-up install script, even if all goes to your liking, just doesn't have a very high rate of reproduction. First of all, how often do how many people install this software? Secondly, how do you plan to trick them into doing so? It's not really much of a challenge to get one person to do this, especially if they trust you. If that's your goal, maybe you'll succeed. But the risk of being traced and apprehended is high.

    So how come this stuff can spread like wildfire amongst the OS-challenged? Can't whatever mechanism that's used there be used to get at the rest of us, too?

    Over the last few years, a frighteningly frequent conduit of contagion for viral infection on toy systems has been the implicit, automatic execution of code with little or no manual intervention on the part of the box's owner. DOWN THIS PATH LIES MADNESS! That this can ever, ever happen is as plain a symptom of complete and total cretinization in the toybox world as you are ever going to see. It's stupid, it's crazy, and it's dangerous. Any programmer who even suggests it needs to go back to flipping hamburgers. Any user who asks for this feature needs to be quietly taken into the back room by the doleful men in long trenchcoats, where he will be told in no uncertain terms that his request is in the best interest of no one but criminals, and that he now has a permanent record even for asking about it.

    No, I don't care that a customer asked for it. Customers are idiots, just like any other user. So what if they pay you? They're still idiots, and it's your professional responsibility to act responsibly, to refuse to go along with their madnesses. The customer is not always right. In fact, they're very often wrong. A physician or a lawyer doesn't do whatever the customer requests, and neither do you. They, meaning the customers or users, simply don't have the background and training; they don't have the experience of seeing why automatic execution from untrustable source is the work of the Devil.

    It's not as though we in Unix have never seen this issue before. In fact, we've seen it time and time again. And guess what? We recognized the problem and we addressed it. And we don't cater to that kind of lunacy anymore.

    Here are a few concrete examples.

    Remember when vi would--or at least, could--automatically execute macro commands embedded in a file in a specific way? That was a dubious feature called modelines. On my OpenBSD systems, if I type :set modeline, the program comes back and says set: the modeline option may never be turned on.

    Another example of learning from our mistakes is the issue of shell archives. Instead of automatically running the sharfile through /bin/sh, there are specially made unshar programs that will do the common things, safely, and nothing else.

    When CGI was first getting big, owners of toy systems would blindly install compilers and interpreters in such a way that these would easily execute arbitrary content coming in off the wire. Despite my pleas, both Netscape and Microsoft were actually advocating this! After a year of warning admins not to do this, and sending mail to the companies who were saying to just go ahead, nothing changed. So I released latro [perl.com]. Then and only then did various companies retract their suggestions, even though they'd been aware of the nature of the problem for a long, long time. Sure, you could be equally stupid on Unix, but for some reason, we weren't. History counts.

    Implicit execution of untrusted material is simply stupid beyond words. And for some reason, the toybox people keep falling for the same chump moves, from MIME attachments to word processor and spreadsheet macros to embedded active scripting controls. I don't know quite why they just keep doing this crap. My hunch, and it's only a hunch, is that this is happening because Microsoft and their moronic minions simply cannot for all the tea in China ever manage to think outside of their quaint but completely fictional little single-user universe. Maybe they don't hire people who come from a background in multiuser and/or networked computing systems. Maybe they don't hire people with real experience at all, just script-kiddies trying to make a buck legitimately but with no true understanding. Maybe the software makers simply can't say no to a customer request, no matter how suicidal they know that request to be. I don't know.

    Whatever the cause, decades of history are completely and repeatedly ignored. They keep making the same mistakes, and they don't fix the underlying causes. Sure, there are things that are hard. Denial of service attacks are hard. People who know exactly all the ramifications of IP who go sending maliciously hand-crafted packets aren't much fun either.

    But these highly technical ploys aren't why most folks on their toyboxes are being screwed up, down, left, right, and sideways. They're being screwed because of very simple matters. They don't have the notion of a protected execution mode. They don't have file permissions or memory protections. They automatically execute content willy-nilly, often with complete access to the whole machine. They expect a program to show up in binary not source form. They don't compare robust checksums from a strongly authenticated sources. They live in an infinitely vulnerable monoculture. They expect things to just magically happen for them without a thought or a care, and guess what? Their wishes are duly granted, much to their eventual dismay.

    It is possible that mass-market factors may someday end up plaguing Unix systems in ways not so far removed from the stupidities that the toy boxes are riddled with. We just have to tell them no, and to condemn in the strongest and loudest possible terms any backsliding into insecurities that, if we ever had them, were long ago banished. Look at the Winix phenomenon, in which a dozen different vendors put together and ship their own Linux operating systems, all specifically constructed to be user-obsequious and Unix-hostile, all in order to appease the lowered expectations of a hundred million Windows idiots who, despite their numbers, really can still be wrong. The stupidity of the masses must never be underestimated.

  • I didn't realise that 'nothing is foolproof' was the point I was making. Reading it again, though, I suppose it was -- at least in part.

    Actually, though, I was fascinated by thinking about potential active, practical security implications. At least in one respect of one particular example given in the original article.

    Seems quite strange when put in a broader context, that (to torture it all out a bit) it's amazing that computers on the Web last as long as they do. They typically have 'doctors' (sysadmins) on hand keeping an eye on them, but they don't have their own immune systems. From which it follows pretty reasonably that they're either 'immunised' (set up and patched properly) or tend to get thoroughly compromised as soon as someone finds a hole.

    A topic for another time: This is (generally) about developing software immune systems. How long before corresponding software pathogens and other marauders are developed that meander about the Web of their own volition looking for victims? Beyond the current implementation of viruses, how quickly do we expect a proper, unattended software arms race to creep out across the Web?

  • Is there not still an attack against this? You generate strings by concatenating the source IP, destination IP, and destination port (that's me saying 'IP', it only says 'address' in the article and they may mean something more complex). If randomly generated 'detectors' have more than ~25% contiguous content in common with a passing packet, the detector is ditched, as the traffic is adjudged to be of a routine nature. Then a detector sits there for two days, in which time it's still ditched if it matches something. After that long, you've probably had enough routine traffic to rule out it being a spurious detector that's going to give out a lot of false positives. So now it gets another five days to see if it matches against any other traffic, which will sound the alarm. If it makes it through five days without doing that, it gets ditched because it's probably esoteric. If it does trigger, it gets kept for good.

    But if you have a rough idea what's on the network you're trying to attack, and what hosts are on there, you may well have a good idea of roughly what kind of traffic is going about. If you know what hosts are there and have an idea of what traffic is (probably) there, then why not just bury a false ID somewhere in your packet?

    You could attempt to forge an ID from knowledge of the network, and fool the alarm mechanism by effectively masquerading as normal traffic. This is probably preventable by looking at exactly where the ID occurs in the packet and deciding if that's where it should be.

    Beyond that, though, what's to stop you quietly trickling a normal-looking flow of do-nothing packets through the network to a given port on a given host? Then when a detector is generated, it'll trigger on your harmless packets and get ditched. Then one day you make your packets do something nefarious, and they get overlooked, something like 'friendly fire'.
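    The trickle attack can be sketched as a toy simulation of the negative-selection lifecycle: any detector that matches traffic seen during the tolerization window is ditched, so an attacker who shows a harmless version of his packets in advance censors exactly the detectors that would later catch him. (The parameters follow the article; the all-ones "packet string" and the seeded generator are just for illustration.)

```python
import random

R = 12           # contiguous bits required for a match (from the article)
STRING_LEN = 49

def matches(a: str, b: str, r: int = R) -> bool:
    """The article's rule: r contiguous bit positions in common."""
    return any(a[i:i+r] == b[i:i+r] for i in range(len(a) - r + 1))

def tolerize(n_detectors: int, seen_traffic: list) -> list:
    """Negative selection: ditch any random detector that matches
    something observed during the tolerization window."""
    rng = random.Random(42)
    detectors = ("".join(rng.choice("01") for _ in range(STRING_LEN))
                 for _ in range(n_detectors))
    return [d for d in detectors
            if not any(matches(d, t) for t in seen_traffic)]

# The string the attacker will eventually use for real...
attack_packet = "1" * STRING_LEN

# ...but which he first trickles through as harmless-looking traffic,
# so every detector that could catch it is deleted before it matures.
censored = tolerize(5000, seen_traffic=[attack_packet])
print(any(matches(d, attack_packet) for d in censored))  # False by construction
```

    The defense would have to distinguish "traffic we saw and tolerated" from "traffic we saw, tolerated, and then watched turn hostile", which is exactly what a pure self/non-self discriminator can't do.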

  • After reading the article I would say that I much prefer Dr. Forrest's approach. It is an internal defense and does not rely on outside resources. I definitely do not like the idea of my system automatically sending and receiving files without my knowledge. It puts the integrity of my system into the hands of this "central" virus authority.

  • Having different binaries doesn't do much good when the API is the same, e.g. buffer overruns, Denial of Service attacks, etc.

    Dr. Dobb's Journal, Dec. 1999, has an article by Bruce Schneier on Attack Trees. For those interested, it discusses one methodology of breaking security.
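    Attack trees are easy to prototype: OR nodes take the cheapest child, AND nodes sum their children, and the root's value is the cheapest overall attack. A minimal sketch with entirely made-up costs:

```python
def cost(node):
    """Cheapest attack cost for a tree of ('OR'|'AND', children) or leaf costs."""
    if isinstance(node, (int, float)):
        return node            # leaf: cost of a basic attack step
    op, children = node
    child_costs = [cost(c) for c in children]
    return min(child_costs) if op == "OR" else sum(child_costs)

# Goal: read a user's mail. Costs are illustrative only.
tree = ("OR", [
    ("AND", [20, 10]),   # bribe the admin AND avoid the audit log
    60,                  # exploit the server remotely
    ("OR", [90, 35]),    # steal the backup tapes OR sniff the network
])
print(cost(tree))  # 30: bribing (20) plus evading the log (10) is cheapest
```

    Schneier's article attaches other attributes the same way (probability, required equipment, risk of detection); the recursion is identical, only the combining functions change.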

    Cheers
  • That's what I thought of first as well... The individual linux build came second.

    The big difference I notice between humans and linux is the extent of the differences in individuality. Yes, I can set up a linux machine with a different configuration, but that is a far cry from the extent to which my DNA differs from your DNA. We're not able to (yet) reconfigure ourselves; we are a fixed individual with an individual blueprint. We can only add to our defensive (autoimmune) network, gain experience fighting disease if you will...

    Linux configurations (of the same distribution) all have the *ability* to be identical. Linux machines all stem from one set configuration and only begin to act differently based on external stimulus. There is a finite extent to the changes that can be made.

    As far as evolving operating systems, I will agree that Linux is the closest to that - with the user getting the ability to choose what patches, updates and fixes they wish to rebuild their kernel with. But it is still driven by a person.

    There was an earlier thread about your OS getting updates on its own. This too would only be a limited representation of DNA. The true extent of AI required for a software autoimmune system would be one that sees what you use, checks to see where your system is vulnerable or not satisfying your needs, looks to see what patches/fixes/upgrades exist, considers what other problems those would cause, performs some limited impact study to see how badly it would affect you, and then, based on that, grabs the patches and "mutates" itself for your benefit.

    Woah, that's kinda neat when you (or I at least) think about it...

    Anybody got the foggiest idea of how to even start coding that... (well, other than #include <stdio.h>)

  • The whole thing is just silly for many reasons. I knew I had to complain about just one of them, but if I complained about them all, I'd write more than the original article. Luckily, I found something that stands out.

    From the article:

    In Dr Forrest's system, every packet sent across the network is examined by stringing together the address
    of the sender, the address of the receiver, and the "port number" on which they are communicating to make
    a string of 49 binary digits (bits). These strings, the equivalent of naturally occurring molecules in a body,
    are compared to a pool of randomly generated 49-bit strings called detectors, the equivalent of the randomly
    generated lymphocytes. If a detector has more than a certain number (currently 12) of contiguous bits in
    common with a passing packet, the detector is deleted, and a new detector is generated to replace it.
    Detectors that survive for two days without matching any packets on the network are deemed to be
    different enough from legitimate strings to be likely to match only foreign invaders.

    49 bits would hold a single IP address (32 bits), a single TCP or UDP port number (16 bits), and one additional bit. They claim it's holding two IP addresses and one port, which would take 80 bits. I don't even know what to say about the fact that it's holding only one, and not both, port numbers. The article says "stringing together", so they're not generating a hash. I could do a lot of speculation as to what they're really putting in those 49 bits, since the article is obviously not correct, but I won't bother. For all I know, the 49 bits figure could be wrong as well.

    So then, they compare these packets with a pool of random 49-bit numbers ("detectors"). 12 contiguous bits in common, and they throw the detector away. A detector must last for two days against this to be actually used. Let's look for ways to prevent any new detectors from ever being used. First, random chance. If there's enough traffic to make such "advanced" software necessary, every sequence of 12 bits will probably occur over the course of two days. Different port numbers (whether they save source or destination doesn't matter, because there will be traffic in both directions). Different IP addresses on either the remote or local network. An attacker purposely causing this to happen. 4096 consecutive legitimate connections from a machine that allocates its ports sequentially and isn't connecting to any other machines in that time. (SMTP, FTP, and HTTP could easily cause this. IRC could with an auto-reconnecting client that keeps getting disconnected.)
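    The "random chance" objection is easy to quantify. Under the article's rule, a detector matches a string when some window of 12 contiguous bit positions agrees exactly; with 49-bit strings there are 49 - 12 + 1 = 38 such windows, each agreeing with probability 2^-12 for random bits, so a rough union-bound estimate of the match probability between two random strings is 38/4096, just under 1%. A quick sketch checking that estimate empirically (the simulation parameters are my own, not from the paper):

```python
import random

def matches(a: str, b: str, r: int = 12) -> bool:
    """The article's rule: r contiguous bit positions in common."""
    return any(a[i:i+r] == b[i:i+r] for i in range(len(a) - r + 1))

rng = random.Random(0)
rand49 = lambda: "".join(rng.choice("01") for _ in range(49))

trials = 20000
hits = sum(matches(rand49(), rand49()) for _ in range(trials))
estimate = hits / trials
print(round(38 / 4096, 4))  # union-bound estimate: 0.0093
print(estimate)             # empirical rate, close to (a bit under) the bound
```

    At roughly one match per hundred random packets, a busy network delivers thousands of matching packets over a two-day window, which is exactly the poster's point: very few randomly generated detectors should survive tolerization against real traffic.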

    Let's say a detector manages to get by (maybe their network connection is down for a couple days). Let's see what happens next:


    These strings are then used to detect deviations from the conditions in which they were originally selected.
    If they fail to match any network traffic within another seven days, they are deleted as redundant; but if
    they match more than a certain number of packets they trigger an alarm.

    They don't say what a match is. A full match? That's worthless. They're probably using the same threshold, which leaves the same problems with false alarms.

    Oh well. It's a really cute idea, as long as you don't throw any facts at it.
  • Automating this process:
    1) Check Bugtraq.
    2) Pull the patch down to evaluate.
    3) Deploy the security patch.
    This is an interesting proposition if you consider the following example: Joe friendly hacker spends days chugging Mt. Dew and pizza while sniffing around another system, sticking all kinds of things into every port he can find. He writes and rewrites code to exploit a small and obscure security hole and gets root.

    Joe friendly celebrates his successful hack with an 'Xtra large meatsa trio' that night, and looks for another network to slip into. Much to his astonishment, every other network has automagically deployed a patch for the exploit he so painfully spent days to find, and further exploitation of that security hole is prevented.

    To quote a military tactic "speed kills".

    By automating the reporting/testing/fixing/deploying process of keeping up with holes, our Joe friendly hacker may indeed pull off one or two successful breaches, but not too many after that.

    This does, however, shift the hacking from being directed at a network, to hacking on the reporting/testing/fixing/deploying system that everyone is using.
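    The check/evaluate/deploy loop above can be sketched abstractly. Everything here is hypothetical: the advisory format, the version scheme, and the package names are stand-ins, not a real Bugtraq client or package manager.

```python
def parse_version(v: str) -> tuple:
    """'2.4.2' -> (2, 4, 2), so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def plan_updates(installed: dict, advisories: list) -> list:
    """Return (package, fixed_version) pairs that need deploying.

    `installed` maps package name -> installed version string;
    each advisory is a dict with 'package' and 'fixed_in' keys.
    """
    actions = []
    for adv in advisories:
        pkg = adv["package"]
        if pkg in installed and \
           parse_version(installed[pkg]) < parse_version(adv["fixed_in"]):
            actions.append((pkg, adv["fixed_in"]))
    return actions

installed = {"wuftpd": "2.4.2", "bind": "8.2.2"}
advisories = [
    {"package": "wuftpd", "fixed_in": "2.6.0"},   # vulnerable: plan an update
    {"package": "bind",   "fixed_in": "8.2.2"},   # already at the fixed version
]
print(plan_updates(installed, advisories))  # [('wuftpd', '2.6.0')]
```

    The hard parts the sketch omits are exactly where the shifted attack surface lives: authenticating the advisory feed and the patches themselves, and testing a patch before unattended deployment.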
    _________________________

  • As someone much wiser than I once said:

    "Any significant advance in technology is indistinquishable from magic."

    If you are shown a card trick, it's 'AI' until you're shown how it's done. If you put a caveman in front of an iMac, he's going to insist it's a deity. Thus, any automated system (and I may be going out on a limb here by using the term ANY) is also an 'AI' system, until you read and understand the source code.

    Now understand that automating a mundane decision process is what has made automation (in its current industrial application) such a productivity booster. Affordably automating physical processes (robots that weld car frames, robots that paint, etc.) has taken decades to come on-line, and continues to evolve. In this same lineage, automating a decision process (i.e. automated trading systems) can and will also reap huge productivity rewards.

    I would agree with you that it truly is automation at work here, and there's nothing artificial about it. Programmers work long and hard to coax the code into doing what they want it to do.
    _________________________

  • by Money__ ( 87045 ) on Friday January 07, 2000 @05:26AM (#1395324)
    I would first like to say that the above poster is spot on with his comments, and I found them fascinating. I would, however, like to pose a question to fellow /.ers concerning the terminology around "Artificial".

    IMNSHO, this term is very overused. Any time a system goes live on a network, it's deemed to be somehow "alive" by putting an "Artificial" in front of it. A good example of this was when IBM's Deep Blue beat a chess grandmaster (Kasparov); it was hyped as a "giant leap forward for Artificial Intelligence".

    There's nothing artificial about it. It was the result of many great programmers and chess masters toiling for years to pull the project off.

    A more accurate name would be Automated Intelligence.

    And this 'Artificial Immune System' is also just an automated series of self-updating decisions. Taking the human out of the loop doesn't make it artificial, it just makes it more cost effective.
    _________________________

  • by sloth jr ( 88200 ) on Friday January 07, 2000 @04:42AM (#1395325)
    Stephanie has some interesting intrusion detection methods. Rather than looking at signatures of data presented in an attack, her approach analyzes sequences of system calls used and compares those sequences against known "correct" behavior (for that particular program). It's strongly based on the genetic notion of self. Surprisingly good results with few false positives. But don't take my word for it - go to her site and read the white papers!
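    The approach described above (from Stephanie Forrest's group) can be sketched in a few lines: record every short window of system calls observed during known-good runs of a program, then score a new trace by the fraction of its windows that are novel. The syscall traces below are invented for illustration; a real detector would trace a live process.

```python
# Sketch of Forrest-style anomaly detection: build a database of short
# system-call sequences ("n-grams") seen during normal runs of a program,
# then flag traces containing sequences never seen in training.

def ngrams(trace, n=3):
    """Yield every length-n window of a syscall trace."""
    for i in range(len(trace) - n + 1):
        yield tuple(trace[i:i + n])

def build_normal_db(normal_traces, n=3):
    """Record every n-gram observed during known-good runs."""
    db = set()
    for trace in normal_traces:
        db.update(ngrams(trace, n))
    return db

def anomaly_score(trace, db, n=3):
    """Fraction of windows in the trace never seen in training."""
    windows = list(ngrams(trace, n))
    if not windows:
        return 0.0
    unseen = sum(1 for w in windows if w not in db)
    return unseen / len(windows)

# Hypothetical traces: normal runs of a daemon vs. an exploited run.
normal = [
    ["open", "read", "mmap", "close", "open", "read", "close"],
    ["open", "read", "mmap", "close", "exit"],
]
db = build_normal_db(normal)
attack = ["open", "read", "execve", "setuid", "open", "read", "close"]
print(anomaly_score(attack, db))  # a high score signals novel behaviour
```

    The appeal of this "sense of self" approach is that it needs no virus signatures at all; anything the program never did in training looks foreign.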

  • Lingua Latin est roxoris

    Yes, there are some things which are wrong, but "its" and "it's" are two separate words, as opposed to your claim that "virii" isn't a word and therefore should not be used.

    There are many "un-official" words.

    The deknez has you, ulna ek tuln.
  • Umm, current anti-virus software does not detect suspected but unknown viruses

    Some packages will monitor for writes to files (such as system files) that the package feels a binary has no business writing to, and put up a "permit or deny" dialog. This isn't exactly an attempt to detect viruses, but it is an attempt to detect and stop viral reproduction, even for unknown viruses. [No, I am not going to get into the Plural Wars.]
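    That kind of behaviour blocking might be sketched like this; the protected paths and the prompt wording are made up for illustration, not any real product's policy.

```python
# Toy behaviour blocker: before allowing a write, check the target
# against a protected list and ask the user ("permit or deny").
PROTECTED = {"/etc/passwd", "C:\\WINDOWS\\SYSTEM.INI"}

def allow_write(path, prompt=input):
    """Permit writes to unprotected paths; ask the user otherwise."""
    if path not in PROTECTED:
        return True
    answer = prompt(f"Program wants to write {path}. Permit? [y/N] ")
    return answer.strip().lower() == "y"

# A write to an ordinary file passes silently.
assert allow_write("/home/user/notes.txt")
# A write to a protected file is denied unless the user says yes.
assert not allow_write("/etc/passwd", prompt=lambda msg: "n")
```

    Note that this stops viral *reproduction* (writing to system files) rather than detecting the virus itself, which is exactly the distinction the comment above draws.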

  • Barring polymorphic computer virii, this metaphor of "ecology" is overextended, an artistic exaggeration.

    Put simply, a computer virus is not a living organism in the usual sense. It does not "mutate". (The expected time for a computer virus to evolve by pure chance is far greater than the lifetime of the universe.) It does not reproduce sexually or asexually.

    Moreover, computer operating systems and their virii have not even scratched the surface of the incredible variety and complexity of the immune system of human beings.

    You could probably compare the state of computer virii and AV software today to bacteria methylating their own DNA to protect it from restriction enzymes that instead attack foreign DNA (read: virus material).

    The best that these AV programs can do today is look for signatures or activity of *known* viruses.

    "Taunting" a virus to trigger in a protected space only works if you know the virus phenotype in the first place.

    Scanning network packets seems to be an expensive and legally tricky proposition, since most virii will be inside binary files, which means you not only have to look for MIME data inside packets, but decode them too, which involves a whole other security issue altogether. And then you will only catch the virus that you have information on, that you already know about.
  • Most people still use a stock kernel from their distribution.

    ...
    ...a lot of the code will come out the same. It's not like the compiler puts some kind of unique fingerprint on the kernel you build.

    I don't think that's what the author means. I think he's talking about other common components, like web browsers and email clients, which are what most modern viruses exploit.

    At the moment a virus author can make sweeping assumptions (it's a Win32 OS with Outlook and Winsock) and use small exploits in each of them to spread the virus.

    The Linux kernel may be mostly the same across most installs of a popular distribution, but the differences stack up: when you consider all the other permutations of mail client and server, HTML renderer/HTTP server, Java VM, etc., it becomes very hard to create a virus that will work with them all!

    Thad
  • How many "viruses" (at least so far as they've been labelled by the media and industry security advisories) lately have been mere scripts taking advantage of weaknesses in the API? (M$'s MAPI especially!!!!!!)

    Sorry Spidy, but I'd have to say your OT rant is a little OT itself.
  • Check out: http://www.cs.unm.edu/~immsec/ and there is some work being done in the UK on different lines but to the same ends called "Host Defence System".
  • Both plurals are used, viruses is more common, but in scientific circles virii is used.
    Bull. I'm calling you on this one. Firstly, it's not even clear that it is a Latin word (M-W [m-w.com] says it's of Latin origin but gives 1599 as the first usage); secondly, even if it is Latin, "virii" is not a correct Latin pluralization! ("Viri" would be.) The standard correct plural of "virus" in English is "viruses". And you're claiming that the most egregiously incorrect one is the one that scientists use? (Not that scientists are infallible or anything!) So put up or shut up - show some proof of this.

    Furthermore, your Japanese seems as odd as your English. "Watakushi" doesn't mean "I" in the Japanese I learned. It's "watashi", and the plural is "watashitachi" - Watashi no namae wa "RFC959" desu; watashitachi wa kohii nominagara - unless you speak some dialect unlike the Tokyo Japanese I learned.

  • Virus is a Latin word.
    Both plurals are used, viruses is more common, but in scientific circles virii is used.
    It is one of those things like formulas/formulae.

    If you want to be proper, this also holds true with Japanese words. I am learning Japanese and one of the interesting things is that for most words there is no plural form, so for kimono, "kimonos" is incorrect. There is a plural for I (I = watakushi, we = watakushitachi) and you (you = anata, you [plural] = anatagata). Doing it otherwise is like saying "yous".

    PS: Did anyone know that the singular form of data is datum?

  • That is interesting
    According to the book "Mastering Japanese", I is watakushi. I suppose it is one of those regional things or something. Just like I heard that Japan can be called either Nihon or Nippon.

    Watakushi wa nihongo ga wakarimasu yo, keredomo takusan wakarimasen ne.

    Arigato gozaimasu

  • by Da Penguin ( 122065 ) on Friday January 07, 2000 @05:22AM (#1395335)
    Currently, in the scientific sense, computer viruses reproduce asexually: there is one parent involved and it produces an exact copy. But, just as in biology, this is a weakness, as there is no variation.

    Theoretically it should be possible to create viruses that reproduce sexually, where two parents are involved and the offspring shares traits of both. Have data structures similar to chromosomes that hold traits of the virus, such as where it is stored, what it does, how it reproduces, its lifetime...

    The viruses would then go around looking for other viruses of the same basic type (species), mix together the chromosomes and create varied offspring. You could even have designated virus breeding grounds.
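    The trait-mixing described above is essentially uniform crossover from genetic algorithms. A harmless sketch of just that mixing step (the trait names and values are invented, and this is not working virus code) might look like:

```python
# Uniform crossover on a dictionary of "chromosome" traits:
# each trait of the child is taken from one parent at random.
import random

def crossover(parent_a, parent_b, rng=None):
    """Build a child whose every trait comes from one of the two parents."""
    rng = rng or random.Random()
    return {key: rng.choice([parent_a[key], parent_b[key]])
            for key in parent_a}

a = {"storage": "boot_sector", "trigger": "date", "lifetime": 30}
b = {"storage": "macro", "trigger": "file_open", "lifetime": 90}
child = crossover(a, b, random.Random(42))
# Every trait in the child matches one of the parents.
assert all(child[k] in (a[k], b[k]) for k in a)
```

    The varied offspring are exactly what would defeat signature scanning: no two children need share a common byte pattern.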

    In the programming side of this, someone would create the basic structure (species) of a virus and a way to insert traits. Virus writers would then come around and specify the traits they want, and send it out (either to a "friend" or to a possible designated virus breeding ground).

    This would create a new type of virus: one that would eventually become so varied that no member of the species could be removed easily.

  • ...or software problems like Crohn's disease, where the immune subsystem goes wild and attacks everything on your machine.

    I can hardly wait.
  • That would be quite possible. Given the ease of finding patterns in compiled software, you could (with some work) make a virus look enough like a crucial part of some program or other; there is plenty of data in a compiled kernel to find similar patterns in, and a virus can be small. In the beginning (if this actually comes to work, they'll have to find ways around it) a fairly common approach would be to make your virus look like a known program, or a common part of the kernel (say, the PPP interface, or SCSI), thus causing the antivirus either to ignore the virus (it looks like known code) or to kill the 'good' code as well (payload). In any case, I think this problem is avoidable with some work, but it still might be exploited (among several others).
  • Interesting notion, though it would have some problems with the identifying codes. How would the viruses identify each other, and not just try to 'rape' your PPP dialer or email client? (Yeah, kidding.) The point is, any identifying code built into a virus will likely be used against it in very short order. Sure, it could probably be well spread and have already delivered its payload by then, but... Research in this area has already been done: tiny 'programs' in a secure box that live and copy 'traits' from each other, blending them for efficiency (they had a task, and combining traits helped). This was a fore-stage of some Internet worm research, AFAIK. It was featured in Newsweek some years ago; sorry for not being more exact.
  • The suggestion that no two operating systems are to be exactly alike is also an interesting one, but hardly practical.
    Hm, I do think some kind of fingerprint could be created for each compiled kernel, added and changed in some selected portions of code; this would make checksumming of the programs come out invalid without the right key. Of course, this might also add some problems in detecting changed binaries (a publisher spreading 'pure' binaries), but it would work fine in an open-source version (a database of file checksums, a local key for the computer, something like that).
    So in part, it might actually be practical and would fix some possible virus infections.
    Whee, finally managed to HTML-format my post. Sorry for the garbled ones before this... :)
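    The keyed-checksum idea above could be sketched with an HMAC: the database stores a fingerprint computed under a per-machine secret key, so an attacker who replaces a file cannot forge the matching entry without that key. The path and key below are placeholders, not a real scheme.

```python
# Per-machine keyed fingerprints over file contents.
import hmac
import hashlib

def fingerprint(data, machine_key):
    """Keyed checksum of a file's contents."""
    return hmac.new(machine_key, data, hashlib.sha256).hexdigest()

def verify(data, machine_key, db, name):
    """True only if the contents still match the recorded fingerprint."""
    return hmac.compare_digest(db.get(name, ""),
                               fingerprint(data, machine_key))

key = b"per-machine secret"  # generated once per install (placeholder)
db = {"/bin/ppp": fingerprint(b"original binary", key)}

assert verify(b"original binary", key, db, "/bin/ppp")
assert not verify(b"infected binary", key, db, "/bin/ppp")
```

    Checking each binary against such a database before execution is exactly the load-time evaluation discussed in the reply below: some overhead, but a lower chance of running an infected file.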
  • This is OT. The subject here is, as far as I can remember, viruses/malicious programs, not basic security. If I DoS my machine, I don't suffer permanent internal damage (i.e., the email client doesn't eat my web server), but if I get a virus infection this might very well occur.

    Sorry for the OT rant. Moderate us both down.

  • No, and that wasn't really my point (if I had one :). What I meant was that an irregular/regular difference in the programs (including the kernel) would add another step towards unique binaries for every computer they were compiled on. (It might be made to work on binaries instead, but that would rather kill the point, since a binary patch would be far easier to replicate than a compile from source.)

    And yes, I did understand Forrest's arguments (partly, at least), but my rant was rather badly expressed on this matter. Sorry for causing misunderstandings.

    And no, checksumming is not really irrelevant. If each program were evaluated before being executed, it would cause an overhead at load time, yes, but it would also decrease the probability of running an infected file.
    Making this secure (or at least close to it) would of course pose a lot of implementation problems, but in general it would add some protection.
    Then again, much binary-released software uses this sort of thing for protection against 'cracking', and patches or cracks still appear...

    yet another rant. Continue in private: spider@darkmere.wanfear.com
  • What bothers me with this sort of approach is still not the attack on _my_ box, but what I will receive from the network. This antivirus cluster: how will one know that it is not infected itself? That would be one of the major security holes in this situation. Where best to strike when wanting a major payload across the net? Yes, there. Make it ship 'antivirus' fixes that strike at some other code, or that are the virus itself. No system is ever secure on a network, and these systems would see the highest number of crack attempts around, since the 'prize' would be highest if they were cracked (the largest spread for your virus). Well, more rambles... boy, am I bored at work today. :)

  • Well, people, for one. Elephants, for another. Even penguins. No two penguins are exactly alike.

    Wow. Now there's a good springboard for an extended metaphor. "No two penguins are exactly alike. Run Linux for evolutionary viability."

    Seriously, though, what about evolving operating systems? Wouldn't that make some sense? Software DNA?

  • Should have used the preview button. I always ignore the best advice.
