

"PHP Vulnerability May Halt Millions of Servers"





ankurthakur
Article : http://www.phpclasses.org/blog/post/171-PHP-Vulnerability-May-Halt-Millions-of-Servers.html

Hello Guys,

So I just read this article, and I think it may affect server performance in some cases.

However, it's an old issue that has resurfaced.

But I am eager to know: will it affect shared hosting much more?

Also, note that we are using an older version of PHP.

Regards,
Ankur Thakur

EDIT:

There is a protective extension for older PHP versions known as Suhosin.
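
For anyone looking for the concrete knobs, a rough sketch of the relevant settings (assumptions on my part: max_input_vars arrived in PHP 5.3.9 as the official fix, and the suhosin.* directive names are taken from Suhosin's documentation, so verify them against your versions):

Code:

; php.ini -- cap how many request variables PHP will parse
max_input_vars = 1000

; Suhosin equivalents for older PHP builds
suhosin.post.max_vars = 1000
suhosin.request.max_vars = 1000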
Peterssidan
I don't think you have to worry about performance being affected just by chance. The problem is with intentional attacks.

They say "this happens even before any PHP code starts being executed" -- does that mean the script will not time out the way normal PHP scripts usually do?
Fire Boar
Hm, interesting. This exploit basically involves turning a typically O(n) operation into an O(n²) operation, where n is the number of HTTP request variables. That can get pretty hairy when n is very large, and it's quite likely that the timeout system only comes into effect after the PHP script has started. Shared hosts are perhaps most at risk, because a larger number of sites means a larger number of targets.
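
To put rough numbers on that quadratic blow-up, here's a minimal sketch (the 2^16 figure is just an illustrative choice): once every key is forced into one bucket, the k-th insert has to walk past the k-1 entries already there.

Code:

<?php
// Worst case: all keys share one bucket, so inserting the k-th
// entry scans the k-1 entries already in that bucket's linked list.
$n = 65536;          // e.g. 2^16 request variables in one POST body
$comparisons = 0;
for ($k = 1; $k <= $n; $k++) {
    $comparisons += $k - 1;
}
// n(n-1)/2: 65536 inserts -> 2147450880 key comparisons
printf("%d inserts -> %d key comparisons\n", $n, $comparisons);
?>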
codersfriend
I think the vulnerability issue depends on the server or how the code is written
Fire Boar
codersfriend wrote:
I think the vulnerability issue depends on the server or how the code is written


You think wrong. Please read the article first - it tells you exactly what the problem is. The attack is on the PHP module itself, so ANY server running ANY code is vulnerable if it uses a PHP setup that cannot limit the number of incoming request variables.

The vulnerability isn't a bug in PHP's own code; it's a collision attack on the hash function used by PHP's underlying C hash table implementation.
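
For the curious, the Zend engine's string hash is a DJBX33A-style function: start at 5381, then multiply by 33 and add each byte. Here's a sketch of it in plain PHP (assuming a 64-bit build so the 32-bit masking works with ordinary integers); the pair "Ez"/"FY" is the well-known colliding building block:

Code:

<?php
// DJBX33A-style string hash, as used by the Zend engine:
// hash = hash * 33 + byte, starting from 5381.
function djbx33a($key) {
    $hash = 5381;
    for ($i = 0; $i < strlen($key); $i++) {
        $hash = (($hash * 33) + ord($key[$i])) & 0xFFFFFFFF; // keep 32 bits
    }
    return $hash;
}

printf("%d %d\n", djbx33a("Ez"), djbx33a("FY")); // same number twice: 5862308 5862308
?>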
jmraker
Is there a proof of concept on this? Something like

Code:

<?php
$a = array();
$a['abcd'] = 'abcd';
$a['xyz'] = 'xyz';  // Warning: 'xyz' has a slight chance of having the same hash value as 'abcd'

var_dump($a);
?>


Code:
array(1) {
  ["abc"]=>
  string(3) "xyz"
}


and then proof that the whole Apache web server comes crashing down when it happens.

I can see how appending billions of entries to an array would eventually make one collide with another, and inserts into the large array would become slower, but I don't see how that's the hash collisions' fault.

(I'm sure the hash function results could differ between versions or compilations to make such an example work.)
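
As it happens, such a pair really exists: "Ez" and "FY" hash to the same value in PHP. But note that a collision does not make one entry replace the other -- the bucket still compares full keys, so both entries survive; the cost is the extra comparison, not lost data. A quick sketch:

Code:

<?php
// "Ez" and "FY" genuinely collide in PHP's string hash, yet both
// entries survive: the key comparison inside the bucket tells them apart.
$a = array();
$a['Ez'] = 'first';
$a['FY'] = 'second';
var_dump($a); // array(2), not array(1)
?>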
kacsababa
The problem is not when hash values collide; the problem is when there is a large number of entries and the computer has to compare the new hash value to every existing hash value. Basically, by simply creating hash tables that are too large, the Apache process (or the computer) hangs and crashes. And this is a security vulnerability because an attacker can do this simply by making requests with a large number of request variables.
ankurthakur
kacsababa wrote:
The problem is not when hash values collide; the problem is when there is a large number of entries and the computer has to compare the new hash value to every existing hash value. Basically, by simply creating hash tables that are too large, the Apache process (or the computer) hangs and crashes. And this is a security vulnerability because an attacker can do this simply by making requests with a large number of request variables.


I think you're right... because the article says:
Quote:
Once the hash table code determines into which linked list the new entry belongs, it determines whether there is already an entry with the same array key in that linked list. If there is no entry with the same key value, the new array entry value is added to the linked list. Otherwise, the new entry value will replace the old entry with the same key.

This process is reasonably fast if the number of entries in the array is relatively small. However, if the array has a very large number of entries, the performance of inserting new entries starts degrading.
Fire Boar
No no no no. Listen to a guy who's done courses in computer security and data structures, and who knows exactly how hash tables work (me). Because I am going to explain exactly how it works, hopefully in a way that is understandable.

A hash table is a great data structure. It basically consists of a large number of linked lists, called "buckets". The idea is to perform a one-way function called a hash function, which takes candidate data as input and gives a number as output. We take that number modulo the number of buckets, and use the result to decide which bucket to put the data in.

For example, suppose you want to add "Hello" to an empty hash table with 16 buckets, using the hash function "number each letter, and add them together". H is the 8th letter in the alphabet, E is the 5th and so on, so the hash value of "Hello" would be 8 + 5 + 12 + 12 + 15 = 52. Since there are 16 buckets (numbered from 0 to 15), we divide 52 by 16 and get the remainder: 4. "Hello" is therefore added to bucket number 4.
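
That example translates directly into code (a sketch using the toy letter-sum hash from the paragraph above):

Code:

<?php
// Toy hash from the example: number each letter (a=1 ... z=26) and sum.
function toy_hash($word) {
    $sum = 0;
    foreach (str_split(strtolower($word)) as $ch) {
        $sum += ord($ch) - ord('a') + 1;
    }
    return $sum;
}

$buckets = 16;
$h = toy_hash("Hello");      // 8 + 5 + 12 + 12 + 15 = 52
echo $h % $buckets;          // 52 mod 16 = 4, so bucket number 4
?>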

The idea is that the hash function should be fast, and should distribute its outputs roughly evenly, so that the data is assigned to buckets in a sensible way. For this, my hash function in the example above is really awful. There are much better hash functions available. If a good hash function is used, searching for data is really fast: you simply hash the input and then look in the bucket, rather than checking all buckets in turn. More buckets means faster access, at the cost of a very small amount of memory used per bucket.

An important property of hash functions for security reasons is the following: it should be computationally infeasible to find any two different inputs x and y such that hash(x) = hash(y). This property is known as strong collision resistance. As it happens, if there is a method to compute such a pair, there is generally a similar method to find lots of different inputs with the same hash.


So, what does all this have to do with the topic? Well, it seems that the hash function used for the hash table which stores the HTTP request variables has been found not to have this strong collision resistance property. This is problematic because an attacker can easily send lots of different request variables, all of whose names hash to the same thing. Two things with the same hash are added to the same bucket, so rather than being spread over the hash table's usual hundred or so buckets, all the data is crammed into the same one. This reduces the efficient hash table to a single, not-so-efficient linked list, which in turn causes the server to take an excessively long time to process every such malicious request.
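
To see that degradation concretely, here's a sketch of the classic demonstration. It assumes the colliding blocks "Ez"/"FY" from above and uses 2^14 keys to keep the runtime tolerable: every concatenation of those two blocks hashes to the same value, so all the keys pile into one bucket.

Code:

<?php
// Build 2^14 distinct keys that all share one hash value by
// concatenating the colliding two-character blocks "Ez" and "FY".
$keys = array();
for ($i = 0; $i < 16384; $i++) {
    $key = '';
    for ($bit = 0; $bit < 14; $bit++) {
        $key .= (($i >> $bit) & 1) ? 'FY' : 'Ez';
    }
    $keys[] = $key;
}

// Colliding keys: every insert scans the one overloaded bucket.
$t = microtime(true);
$table = array();
foreach ($keys as $k) {
    $table[$k] = 1;
}
printf("colliding keys: %.2fs\n", microtime(true) - $t);

// The same number of ordinary keys is near-instant by comparison.
$t = microtime(true);
$table = array();
for ($i = 0; $i < 16384; $i++) {
    $table["key$i"] = 1;
}
printf("ordinary keys:  %.2fs\n", microtime(true) - $t);
?>

An attacker gets the same effect remotely by sending strings like these as POST variable names, which is exactly the hash table this thread is about.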
ankurthakur
Oh Fire Boar... Thanks a lot for this useful explanation...

It really makes everything clear in my mind... :)
D'Artagnan
There is a lot of FUD on this; sometimes PHPClasses disappoints me so much. Yes, there is a problem that needs to be addressed, but the title is so exaggerated. We have plenty of ways to keep our servers safe from that kind of DDoS (and yes, that is a type of DDoS). Lemos is a writer and should stop trying to be a developer or a journalist until he is ready to be serious about it.