LGA 1155 CPU

E3-1290 V2      9749
E3-1280 V2      9561
E3-1270 V2      9481
E3-1275 V2      9344
E3-1240 V2      9177
E3-1245 V2      9110
E3-1230 V2      8852
E3-1290         8699
E3-1280         8473
E3-1275         8348
E3-1270         8233
E3-1245         8048
E3-1240         7972
E3-1230         7907
E3-1265L V2     7779
E3-1235         7680
E3-1225 V2      6841
E3-1220 V2      6661
E3-1260L        6534
E3-1220         6103
E3-1265L        6038
E3-1225         5917

Debug a Linux daemon on a remote server

Using GNU screen + gdb to debug daemon software on a remote machine

 

# Launch the daemon service

/etc/init.d/dnsproxy start

 

# Create a screen session, say dns

screen -dmS dns

 

# Attach to the screen session

screen -r dns

 

# Launch gdb within the session

gdb

# Attach to the daemon process (xxxx = its PID, e.g. from pidof dnsproxy)
attach xxxx

# Continue execution
c

 

# Detach the screen session

Ctrl-a d    (press Ctrl-a, then d)

 

 

# Re-attach to the screen session later to continue debugging in gdb

screen -r dns

Web development in C

davidmoreno/onion

kore

klone

nxweb

 

fast C HTTP server library comparison & wishlist

Hi,

Trying to choose an embeddable HTTP server library for a project, and
also considering writing my own special-purpose code, I came up with
the following comparison of libonion vs. other C libraries that include
high-performance HTTP support and are currently maintained.

Licenses:

libevhtp+libevent – 3-clause BSD
libmicrohttpd – LGPL 2.1
libonion – Apache 2 (except for some examples) or GPLv2+
mongoose – GPLv2 (and commercial)

Build environment:

libevhtp+libevent – cmake+autotools
libmicrohttpd – autotools
libonion – cmake
mongoose – none (one large file, like SQLite)

Code size (“text” as reported by the size(1) command on the library or
on a tiny sample program if statically linked, on Scientific Linux 6.6
on x86_64):

libevhtp+libevent – ~500 KB, or ~200 KB without unicode.c.o and reg*.c.o
libmicrohttpd – ~100 KB default, ~55 KB with most ./configure --disable-* options
libonion – ~100 KB with most ONION_USE_* set to false
mongoose – ~100 KB including JSON-RPC

For the smaller builds of libmicrohttpd and libonion, I kept threads
support enabled, but disabled pretty much everything else that could be
disabled without patching the code.  It looks like libmicrohttpd wins
this test.  Maybe there’s more code in libonion to disable (make into
compile-time options) – I haven’t checked yet.

Built-in JSON support:

libevhtp+libevent – none
libmicrohttpd – none
libonion – JSON builtin, JSON-RPC in Apache 2 licensed example
mongoose – JSON-RPC builtin (simple JSON parser not exported?)

All of this is for current versions on GitHub or in recent release
tarballs as of a few days ago.

Maybe someone else will find this useful.  I’d appreciate corrections.
It is very likely that I overlooked something.

On a related note, I found the list of alternate implementations on the
libmicrohttpd homepage very helpful.  That’s classy.  Thanks.

My wishlist:

A process-based (pre-fork) + [e]poll mode, like nginx has.  Processes have
pros and cons vs. threads: more reliable, faster malloc/free (no lock
contention risk), but OTOH slower context switches (if running process
count exceeds number of logical CPUs).  I would likely prefer this mode,
but all four libraries appear to be missing it.

Ability to accept not only HTTP, but also raw TCP connections, and
handle them in application code along with the library-handled HTTP.
Such as for implementing JSON-RPC directly over TCP, while also having
it over TCP+HTTP, and without having to manage our own separate
thread/process pool.  Do any of the four have this?  I found no such
examples with any of them.

Easily and cleanly embeddable into an application’s source tree, while
also allowing easy updates to new upstream versions.  mongoose almost
achieves this, but at the expense of meaningful separation
into multiple translation units within the library itself.  I think we
don’t have to pay this price.  We could have multiple files (10 or so?),
in a subdirectory, which are also easy to list in a project’s Makefile.
Maybe I’d do that for libonion, freeing it from cmake, but then updating
to new upstream versions would be harder.  Do I really have to bite the
cmake or/and autotools bullet for something as simple as accepting HTTP?

I’d prefer a more permissive license like 2-clause BSD or MIT.  But I
guess I’ll have to settle on Apache 2 or such.  mongoose’ use of GPLv2
is understandable – need to make money – but is otherwise a disadvantage
(even for a commercial project that could pay, and even when publishing
any source code changes is not a problem and would be planned anyway; we
just don’t want to put our time into something that we would not always
be able to reuse in other projects).

Optional JSON from the same upstream is a plus, ideally exported both as
a generic JSON parser and as JSON-RPC support.  Looks like only libonion
sort of delivers both (but the code might not be production quality).

Ability to exclude more of the functionality – for example, to include
only the POST method (and not compile in code for the rest).  I am
concerned not so much about code size per se, as I am about attack
surface, and about ease of code reviews (not having to determine if some
compiled-in code is actually dead code in a given case, but to know
reliably that it’s not compiled in).

On a related note, David’s use of Coverity for libonion is commendable,
but it looks abandoned since 2014, and many “defects” (even if false
positives) remained unfixed back then.

Mark’s use of Coverity for libevhtp is also commendable… and looks
abandoned since May 10, 2015.  It shows “48,919 Lines of Code Analyzed”,
only “4 Total defects” and “0 Outstanding” – I guess it means that
everything detected by Coverity before (which must have been many more
“defects”) had been eliminated prior to that run.  That’s impressive.
But we don’t know how many new “defects” may have appeared in the 9
months that passed.  Also, I haven’t looked into whether libevent has
been subjected to similar static analysis or not (although being
initially written by Niels Provos speaks in its favor, given Niels’
other work), and accepting TCP connections isn’t as much risk as parsing
HTTP and JSON.

I don’t give a lot of weight to the Coverity results for my
decision-making, but it shows whether the maintainers care, and there
are few other somewhat-meaningful metrics I could use before having
spent time to analyze and try to use the code myself.

Why am I posting this to the onion mailing list specifically?  I find it
likely that libonion wins for me, although not by a large margin (and
there’s a lot that I dislike about it).  This is not a final decision
yet.  I might as well end up reverting to writing special-purpose code
from scratch.

Thanks,

Alexander

Missing diskspace in Linux

http://lunatic.no/2014/08/missing-diskspace-in-linux/


Today I had a problem with a server that had no more disk space. And I learned something new in the process.

df -h told me that 100% was in use, which was about 29 GB on that server.

But if I checked the root partition with du -shx /, I got about 9 GB of used space.

So off to checking where the space could have gone:

Inode usage

I know “du” does not account for inode overhead, etc. But according to dumpe2fs /dev/sdx1, my inode size * inode count = about 700 MB.
So that was not it.

“Hidden” directories under mountpoints

“du” will not see space used by files located in a path that is later mounted over by another filesystem. For example, if you have files in /home/ on your root partition, but later mounted /home on its own partition, those files will be hidden behind the new mountpoint.

To be able to check these files without unmounting anything in use, I did the following:

mount --bind / /mnt
du -shx /mnt

If “du” gave me a different result now, I would have known that the files were hidden under one of my mountpoints. But sadly, they were not. I was starting to run out of options.

Deleted files

If a process opens a file and you then delete it with rm thefile, the file is removed from the filesystem, but its inode and data blocks are not freed until the process closes/releases it. This is something I love about Linux/POSIX systems, since it means processes do not need to lock my files and I keep full control over them, as opposed to other operating systems (Windows). But I thought that once you delete an opened file, there is no easy way of knowing which deleted files are “held back” by processes. But there is!

lsof | grep deleted quickly gave me a list of files that have been deleted but are still held open by processes, along with their sizes. In my case a deleted log file of 17 GB was held by asterisk. So I reloaded the logger module of asterisk, and hey presto! The disk space was available again.

Now only 9GB was “in use” according to df -h.
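To reproduce the "deleted but still open" effect in isolation, here is a small demonstration program. The file name bigfile.tmp is just an example of mine, and posix_fallocate is used so that real blocks are allocated rather than a sparse file; a minimal sketch:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.tmp", O_CREAT | O_WRONLY, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Allocate 1 GB of real blocks (not a sparse file). */
    int err = posix_fallocate(fd, 0, 1024L * 1024 * 1024);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        return 1;
    }

    /* Remove the directory entry: du no longer sees the file, but df
     * still counts the space, and lsof shows it as "(deleted)", until
     * this process closes the descriptor or exits. */
    unlink("bigfile.tmp");

    puts("bigfile.tmp unlinked but still open; check df, du and lsof now");
    pause();   /* hold the descriptor open until the process is killed */
    return 0;
}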

Remove GPT from a disk under Ubuntu

Suppose /dev/sdb is the device to be cleaned

 

sudo apt-get install gdisk

sudo gdisk /dev/sdb

> Enter 2   (select GPT)

> Enter ?   (get command list)

> Enter x   (expert mode)

> Enter ?   (get command list)

> Enter z   (destroy GPT)

 

# Create a DOS (MBR) partition table and a partition

sudo fdisk /dev/sdb

# Format the DOS partition, say /dev/sdb1

sudo mkfs.msdos /dev/sdb1

Repost: How core dump files are produced and analyzed on Linux

A core, also called a core dump file, is a mechanism of Unix/Linux operating systems. For an online service, dumping core is dreaded, because while the core is being written the service cannot respond normally and has to be recovered, and the larger the memory of the dumping process, the longer this can take (for example, when a process occupies more than 60 GB of memory, writing the complete core file to disk can take 15 minutes); the traffic lost during that time is hard to estimate.

Everything has two sides, though. Although the OS terminates the current process when it dumps core, it also preserves first-hand data about the scene: the OS acts like a camera whose shutter has been pressed, and the photo is the resulting core file. It contains the process's memory, CPU registers and other state at the moment it was terminated, which developers can use later for debugging.

There are many reasons a core may be produced. For instance, some older Unix versions did not support the mechanism, available on modern Linux, of attaching GDB directly to a running process; you first had to send the process a terminating signal and then read the core file with a tool. On Linux we can use kill to send a signal to a given process, or use the gcore command, to make it dump core and exit. At a shallow level, dumping core means the current process has a bug that the programmer must fix. At a deeper level, the process has violated some OS-level protection mechanism, forcing the OS to send it a signal such as SIGSEGV (signal 11). For example, dereferencing a null pointer or indexing past the end of an array dumps core because it violates the OS's memory management by touching memory that does not belong to the current process, and the OS warns about it by dumping core, much like an immune system warning of a virus in the body by producing a fever. (Interestingly, not every out-of-bounds array access dumps core; it depends on the virtual page sizes and boundaries used by the OS's memory management. Even without a core dump, you may well read garbage data that disrupts the program's later behaviour, which is a very hard kind of bug to track down.)

After all that, core dumps may sound overpowering and beyond our control, but they are not. There are two ways to control how and where cores are produced:

1. Modify /proc/sys/kernel/core_pattern. This file controls the file name of the generated core. By default it contains a single line, "core". It can be customized, usually with % followed by different characters; a few of them:

%p  PID of the dumping process
%u  UID of the dumping process
%s  number of the signal that caused the core
%t  time of the dump, in seconds since 1970-01-01 00:00:00
%e  name of the executable of the dumping process

2. The ulimit -c command. It shows the current OS limit on core file size; if it is 0, no core files are produced. To change it, use:

ulimit -c n

where n is a number giving the maximum allowed core file size in KB. To make it unlimited, run:

ulimit -c unlimited
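The same limit can also be raised from inside a program; a minimal sketch using the setrlimit(2) API, roughly the programmatic equivalent of ulimit -c unlimited (within the hard limit):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    rl.rlim_cur = rl.rlim_max;   /* raise the soft limit up to the hard limit */

    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* From here on (and in child processes), core files up to
     * rl.rlim_max bytes may be written when the process crashes. */
    return 0;
}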

Once a core file has been produced, the next step is to inspect it, pin down the problem and fix it. To do that, it helps to first look at the format of a core file and get to know it a little better.

First of all, a core file is in ELF format, which can be confirmed with readelf -h (figure omitted).

The ELF header that is read out shows that the file type is "core file". How does readelf know? A glance at the corresponding data structure (figure omitted) gives a hint:

when the value is 4, the file is a core file. With that, the whole picture is clear.
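The check can be reproduced in a few lines of C: the Elf64_Ehdr type, the ELFMAG magic and the ET_* constants (ET_CORE is 4) come straight from <elf.h>. A minimal sketch, assuming a 64-bit ELF core in native byte order:

#include <elf.h>      /* Elf64_Ehdr, ELFMAG, ET_CORE (= 4), ET_EXEC (= 2), ... */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof ehdr, 1, f) == 1 &&
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) == 0 &&
        ehdr.e_type == ET_CORE)
        printf("%s is an ELF core file (e_type = %d)\n", argv[1], ehdr.e_type);

    fclose(f);
    return 0;
}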

With that understood, let's look at how to read a core file and track down the bug. On Linux, the usual command for reading a core is:

gdb exec_file core_file

GDB first reads the symbol table from the executable and then reads the core file. Can the core be used without the executable? No: the core file contains no symbol table, so it cannot be debugged on its own. This can be verified with:

objdump -x core_file | tail

We see the following two lines:

SYMBOL TABLE:

no symbols

which show that this ELF file contains no symbol table information.

To explain how to read the information in a core, here is a simple example:

#include <stdio.h>

int main()
{
    int stack_of[100000000];   /* ~400 MB directly on the stack */
    int b = 1;
    int *a;                    /* uninitialized pointer */

    *a = b;
}

Compile this program with gcc -g a.c -o a; it dumps core as soon as it runs. Looking at the stack with gdb a core_file shows that it cored at this line:

int stack_of[100000000];

The reason is obvious: allocating such a huge array directly on the stack overflows the stack and violates the OS limit on stack size, so the process dumps core (whether it actually does also depends on how large the OS allows the stack to be, typically 8 MB). One thing should be made clear, though: the code that really triggers the core is not the allocation int stack_of[100000000] but the following line, int b = 1. Why? One cause of a core dump is an illegal memory access. The code above never touches the array stack_of when it is declared, but the subsequent variable declaration and assignment amounts to an out-of-bounds access, and that is what dumps core. To explain in more detail, let's use gdb to look at where the core happened; gdb a core_file shows (figure omitted)

that the program hit a segmentation fault ("Segmentation fault") at the line int b = 1. Let's look at the current stack (figure omitted).

The instruction pointer rip points to address 0x400473; the instruction there is (figure omitted)

a movl that stores the immediate value 1 to the address 0xffffffffe8287bfc(%rbp), where rbp holds the frame pointer and 0xffffffffe8287bfc is clearly a negative offset, -400000004. That explains everything: our int stack_of[100000000] occupies 400000000 bytes, b is an int occupying 4 bytes, and the stack grows from high addresses to low addresses, so b's address on the stack is 0xffffffffe8287bfc(%rbp), i.e. $rbp-400000004. When we try to examine that address (figure omitted),

we find it cannot be accessed, because it lies outside the range the OS allows.

Now let's modify the program:

#include <stdio.h>
#include <stdlib.h>    /* for malloc */

int main()
{
    int *stack_of = malloc(sizeof(int) * 100000000);
    int b = 1;
    int *a;            /* still an uninitialized pointer */

    *a = b;
}

Compile it with gcc -O3 -g a.c -o a; it cores again when run. Looking at the stack with gdb (figure omitted), the bug is at line 7, i.e. *a = b. If we now try to print the value of b, we find that b is missing from the symbol table. Why? Because gcc was given -O3, which optimizes the program; one side effect of optimization is that some local variables are thrown away, which makes debugging harder. In our code, b is assigned at its declaration and then only used to assign *a; after optimization the variable is no longer needed and *a can simply be assigned 1 directly. At the assembly level, this optimization saves one MOV instruction and one register.

At this point the debugging information has become somewhat distorted, so we recompile the source without -O3 (which is why large software projects keep a debug build: the debug build is unoptimized, contains the full symbol table and is easy to debug), rerun the program, and look at the new core (figure omitted).

Now it is much clearer: the value of b is fine; the problem is a, which points to an illegal region. a was never given any memory, and that is what caused the core. Of course, the problem in this example is so obvious that you can see it almost at a glance, but it still serves as an example of the kinds of things to pay attention to when reading a core.

Which hashing algorithm is best for uniqueness and speed?

http://programmers.stackexchange.com/questions/49550/which-hashing-algorithm-is-best-for-uniqueness-and-speed

I tested some different algorithms, measuring speed and number of collisions.

I used three different key sets:

A list of 216,553 English words (in lowercase)
The numbers “1” to “216553” (think ZIP codes, and how a poor hash took down msn.com)
216,553 “random” (i.e. type 4 uuid) GUIDs

For each corpus, the number of collisions and the average time spent hashing was recorded.

I tested:

DJB2
DJB2a (variant using xor rather than +)
FNV-1 (32-bit)
FNV-1a (32-bit)
SDBM
CRC32
Murmur2 (32-bit)
SuperFastHash

Results

Each result contains the average hash time, and the number of collisions

Hash            Lowercase       Random UUID     Numbers
=============   =============   =============   =============
Murmur          145 ns          259 ns          92 ns
                6 collis        5 collis        0 collis
FNV-1a          152 ns          504 ns          86 ns
                4 collis        4 collis        0 collis
FNV-1           184 ns          730 ns          92 ns
                1 collis        5 collis        0 collis▪
DJB2a           158 ns          443 ns          91 ns
                5 collis        6 collis        0 collis▪▪▪
DJB2            156 ns          437 ns          93 ns
                7 collis        6 collis        0 collis▪▪▪
SDBM            148 ns          484 ns          90 ns
                4 collis        6 collis        0 collis**
SuperFastHash   164 ns          344 ns          118 ns
                85 collis       4 collis        18742 collis
CRC32           250 ns          946 ns          130 ns
                2 collis        0 collis        0 collis
LoseLose        338 ns          –               –
                215178 collis

Notes:

The LoseLose algorithm (where hash = hash + character) is truly awful. Everything collides into the same 1,375 buckets (see the sketch after this list).
SuperFastHash is fast, with things looking pretty scattered, but my goodness, the number of collisions. I'm hoping the guy who ported it got something wrong; it's pretty bad.
CRC32 is pretty good. Slower, and a 1k lookup table.
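To make the LoseLose point concrete, here is the whole algorithm written out in C (the function name is mine; the logic is just hash = hash + character, as described above). Because it only sums byte values, every anagram collides, and for lowercase English words the sums span a narrow numeric range, which is why 216,553 words land in only about 1,375 buckets:

unsigned long lose_lose(const unsigned char *str)
{
    unsigned long hash = 0;
    int c;

    while ((c = *str++))
        hash += c;      /* hash = hash + character */

    return hash;
}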

Do collisions actually happen?

Yes. I started writing my test program to see if hash collisions actually happen – and are not just a theoretical construct. They do indeed happen:

FNV-1 collisions

creamwove collides with quists

FNV-1a collisions

costarring collides with liquid
declinate collides with macallums
altarage collides with zinke
altarages collides with zinkes

Murmur2 collisions

cataract collides with periti
roquette collides with skivie
shawl collides with stormbound
dowlases collides with tramontane
cricketings collides with twanger
longans collides with whigs

DJB2 collisions

hetairas collides with mentioner
heliotropes collides with neurospora
depravement collides with serafins
stylist collides with subgenera
joyful collides with synaphea
redescribed collides with urites
dram collides with vivency

DJB2a collisions

haggadot collides with loathsomenesses
adorablenesses collides with rentability
playwright collides with snush
playwrighting collides with snushing
treponematoses collides with waterbeds

CRC32 collisions

codding collides with gnu
exhibiters collides with schlager

SuperFastHash collisions

dahabiah collides with drapability
encharm collides with enclave
grahams collides with gramary
…snip 79 collisions…
night collides with vigil
nights collides with vigils
finks collides with vinic

Randomnessification

The other subjective measure is how randomly distributed the hashes are. Mapping the resulting HashTables shows how evenly the data is distributed. All the hash functions show good distribution when mapping the table linearly:

(image omitted)

Or as a Hilbert Map (XKCD is always relevant):

(image omitted)

Except when hashing number strings (“1”, “2”, …, “216553”) (for example, zip codes), where patterns begin to emerge in most of the hashing algorithms:

SDBM: (image omitted)

DJB2a: (image omitted)

FNV-1: (image omitted)

All except FNV-1a, which still look plenty random to me: (image omitted)

In fact, Murmur2 seems to have even better randomness with Numbers than FNV-1a: (image omitted)

When I look at the FNV-1a “number” map, I think I see subtle vertical patterns. With Murmur I see no patterns at all. What do you think?

The extra * in the above table denotes how bad the randomness is, with FNV-1a being the best and DJB2x being the worst:

Murmur2:       .
FNV-1a:        .
FNV-1:         ▪
DJB2:          ▪▪
DJB2a:         ▪▪
SDBM:          ▪▪▪
SuperFastHash: .
CRC:           ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪
Loselose:      ▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪▪ (and several more rows of marks)

I originally wrote this program to decide if I even had to worry about collisions: I do.

And then it turned into making sure that the hash functions were sufficiently random.

FNV-1a algorithm

The FNV1 hash comes in variants that return 32, 64, 128, 256, 512 and 1024 bit hashes.

The FNV-1a algorithm is:

hash = FNV_offset_basis
for each octetOfData to be hashed
    hash = hash xor octetOfData
    hash = hash * FNV_prime
return hash

Where the constants FNV_offset_basis and FNV_prime depend on the return hash size you want:

Hash Size   Prime                         Offset
=========   ===========================   =======================================
32-bit      16777619                      2166136261
64-bit      1099511628211                 14695981039346656037
128-bit     309485009821345068724781371   144066263297769815596495629667062367629
256-bit
prime: 2^168 + 2^8 + 0x63 = 374144419156711147060143317175368453031918731002211
offset: 100029257958052580907070968620625704837092796014241193945225284501741471925557
512-bit
prime: 2^344 + 2^8 + 0x57 = 35835915874844867368919076489095108449946327955754392558399825615420669938882575126094039892345713852759
offset: 9659303129496669498009435400716310466090418745672637896108374329434462657994582932197716438449813051892206539805784495328239340083876191928701583869517785
1024-bit
prime: 2^680 + 2^8 + 0x8d = 5016456510113118655434598811035278955030765345404790744303017523831112055108147451509157692220295382716162651878526895249385292291816524375083746691371804094271873160484737966720260389217684476157468082573
offset: 1419779506494762106872207064140321832088062279544193396087847491461758272325229673230371772250864096521202355549365628174669108571814760471015076148029755969804077320157692458563003215304957150157403644460363550505412711285966361610267868082893823963790439336411086884584107735010676915

See the main FNV page for details.
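As a concrete reference, here is the 32-bit variant of the pseudocode above transcribed into C, using the 32-bit prime and offset basis from the table (a straightforward sketch, not tuned for speed):

#include <stddef.h>
#include <stdint.h>

uint32_t fnv1a_32(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint32_t hash = 2166136261u;      /* FNV offset basis (32-bit) */

    for (size_t i = 0; i < len; i++) {
        hash ^= p[i];                 /* xor in the next octet ...          */
        hash *= 16777619u;            /* ... then multiply by the FNV prime */
    }
    return hash;
}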

As a practical matter:

32-bit UInt32,
64-bit UInt64, and
128-bit Guid can be useful

All my results are with the 32-bit variant.

FNV-1 better than FNV-1a?

No. FNV-1a is all around better. There were more collisions with FNV-1a when using the English word corpus:

Hash     Word Collisions
======   ===============
FNV-1    1
FNV-1a   4

Now compare lowercase and uppercase:

Hash     lowercase word collisions   UPPERCASE word collisions
======   =========================   =========================
FNV-1    1                           9
FNV-1a   4                           11

In this case FNV-1a isn’t “400%” worse than FNV-1, only 20% worse.

I think the more important takeaway is that there are two classes of algorithms when it comes to collisions:

collisions rare: FNV-1, FNV-1a, DJB2, DJB2a, SDBM
collisions common: SuperFastHash, Loselose

And then there’s how evenly distributed the hashes are:

outstanding distribution: Murmur2, FNV-1a, SuperFastHash
excellent distribution: FNV-1
good distribution: SDBM, DJB2, DJB2a
horrible distribution: Loselose

Update

Murmur? Sure, why not

Update

@whatshisname wondered how a CRC32 would perform, added numbers to the table.

CRC32 is pretty good. Few collisions, but slower, and the overhead of a 1k lookup table.

Snip all erroneous stuff about CRC distribution – my bad
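For reference, the "1k lookup table" is the usual 256-entry table of a table-driven CRC-32. This may not be the exact implementation the answer tested, but a standard sketch with the common reflected polynomial 0xEDB88320 (as used by zlib and PNG) shows the shape and the table cost:

#include <stddef.h>
#include <stdint.h>

static uint32_t crc_table[256];      /* 256 * 4 bytes = the 1 KB table */

/* Call once before hashing. */
static void crc32_init(void)
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t c = i;
        for (int k = 0; k < 8; k++)
            c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
        crc_table[i] = c;
    }
}

static uint32_t crc32_hash(const unsigned char *buf, size_t len)
{
    uint32_t c = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++)
        c = crc_table[(c ^ buf[i]) & 0xFF] ^ (c >> 8);
    return c ^ 0xFFFFFFFFu;
}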

Up until today I was going to use FNV-1a as my de facto hash-table hashing algorithm. But now I’m switching to Murmur2:

Faster
Better randomnessification of all classes of input

And I really, really hope there’s something wrong with the SuperFastHash algorithm I found; it’s too bad to be as popular as it is.

Update: From the MurmurHash3 homepage on Google:

(1) – SuperFastHash has very poor collision properties, which have been documented elsewhere.

So I guess it’s not just me.

Update: I realized why Murmur is faster than the others. MurmurHash2 operates on four bytes at a time. Most algorithms are byte by byte:

for each octet in Key
    AddTheOctetToTheHash

This means that as keys get longer Murmur gets its chance to shine.
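To make the difference concrete, here is a sketch of the two loop shapes. The mixing steps are placeholders of mine, not the real Murmur2 mixing function; only the structure (one octet per step vs. one 32-bit word per step) is the point:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Byte-at-a-time shape: one mix step per octet (DJB2, FNV, SDBM, ...). */
uint32_t hash_per_byte(const unsigned char *key, size_t len)
{
    uint32_t h = 0;
    for (size_t i = 0; i < len; i++)
        h = h * 33 + key[i];              /* placeholder mix (DJB2-style) */
    return h;
}

/* Word-at-a-time shape: consume 4 bytes per iteration, as Murmur2 does,
 * then handle the 0-3 trailing bytes. The mix itself is a placeholder. */
uint32_t hash_per_word(const unsigned char *key, size_t len)
{
    uint32_t h = (uint32_t)len;
    while (len >= 4) {
        uint32_t k;
        memcpy(&k, key, 4);               /* read one 32-bit word */
        h = (h ^ k) * 0x9e3779b1u;        /* placeholder mix, not Murmur's */
        key += 4;
        len -= 4;
    }
    for (size_t i = 0; i < len; i++)      /* trailing 0-3 bytes */
        h = (h ^ key[i]) * 0x9e3779b1u;
    return h;
}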

Update
GUIDs are designed to be unique, not random

A timely post by Raymond Chen reiterates the fact that “random” GUIDs are not meant to be used for their randomness. They, or a subset of them, are unsuitable as a hash key:

Even the Version 4 GUID algorithm is not guaranteed to be unpredictable, because the algorithm does not specify the quality of the random number generator. The Wikipedia article for GUID contains primary research which suggests that future and previous GUIDs can be predicted based on knowledge of the random number generator state, since the generator is not cryptographically strong.

Randomness is not the same as collision avoidance, which is why it would be a mistake to try to invent your own “hashing” algorithm by taking some subset of a “random” guid:

int HashKeyFromGuid(Guid type4uuid)
{
    // A "4" is put somewhere in the GUID.
    // I can't remember exactly where, but it doesn't matter for
    // the illustrative purposes of this pseudocode.
    int guidVersion = ((type4uuid.D3 & 0x0f00) >> 8);
    Assert(guidVersion == 4);

    return (int)GetFirstFourBytesOfGuid(type4uuid);
}

Note: Again, I put “random GUID” in quotes, because it’s the “random” variant of GUIDs. A more accurate description would be Type 4 UUID. But nobody knows what type 4, or types 1, 3 and 5 are. So it’s just easier to call them “random” GUIDs.

All English Words mirrors

http://www.filedropper.com/allenglishwords

https://web.archive.org/web/20070221060514/http://www.sitopreferito.it/html/all_english_words.html

answered by Ian Boyd

Comments:

Some of the collisions make awesome band names. Particularly 'Adorable Rentability'. – mcfinnigan

Also, I'd love to hear how you generated these results (source code and/or tools). – Earlz

@Earlz The development tool is Delphi. I assume you mean the images, though. For the "linear" map I created a square bitmap of size n x n (where n = Ceil(sqrt(hashTable.Capacity))). Rather than simply black for "list entry is occupied" and white for "list entry is empty", I used an HSLtoRGB function, where the hue ranged from 0 (red) to 300 (magenta); white is still an "empty list cell". For the Hilbert map I had to hunt Wikipedia for the algorithm that turns an index into an (x, y) coordinate. – Ian Boyd

It would be really interesting to see how SHA compares, not because it's a good candidate for a hashing algorithm here, but because it would be really interesting to see how any cryptographic hash compares with these made-for-speed algorithms. – Michael
Another answer (score 30):

Here is a list of hash functions, but the short version is:

If you just want to have a good hash function and cannot wait, djb2 is one of the best string hash functions I know. It has excellent distribution and speed on many different sets of keys and table sizes.

unsigned long
hash(unsigned char *str)
{
    unsigned long hash = 5381;
    int c;

    while (c = *str++)
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */

    return hash;
}

answered by Dean Harding

Comment: Actually djb2 is zero sensitive, as are most such simple hash functions, so you can easily break such hashes. It has a bad bias, too many collisions and a bad distribution; it breaks on most smhasher quality tests: see github.com/rurban/smhasher/blob/master/doc/bernstein. His cdb database uses it, but I wouldn't use it with public access. – rurban
Another answer (score 26):

If you want to create a hash map from an unchanging dictionary, you might want to consider perfect hashing (https://en.wikipedia.org/wiki/Perfect_hash_function): during the construction of the hash function and hash table, you can guarantee, for a given dataset, that there will be no collisions.
answered by Damien

Comments:

Here's more about (minimal) perfect hashing: burtleburtle.net/bob/hash/perfect.html, including performance data, although it doesn't use the most current processors, etc. – Ellie Kesselman

It's pretty obvious, but worth pointing out, that in order to guarantee no collisions the keys would have to be the same size as the values, unless there are constraints on the values the algorithm can capitalize on. – devios

I improved gperf and provide a nice frontend to most perfect hash generators at github.com/rurban/Perfect-Hash. It's not yet finished, but already better than the existing tools. – rurban

@devios: I've recently learned that there are several hash table algorithms that guarantee no collisions, even when you use long strings as keys, strings much longer than the hash table index values generated by the hash function, without any constraints on those strings. See cs.stackexchange.com/questions/477/… . – David Cary
Another answer (score 20):

CityHash by Google is the algorithm you are looking for. It is not good for cryptography but is good for generating unique hashes.

Read the blog for more details and the code is available here.

CityHash is written in C++. There also is a plain C port.

About 32-bit support:

All the CityHash functions are tuned for 64-bit processors. That said, they will run (except for the new ones that use SSE4.2) in 32-bit code. They won't be very fast though. You may want to use Murmur or something else in 32-bit code.

answered by Vipin Parakkat

Comments:

Have a look at SipHash too; it is meant to replace MurmurHash/CityHash/etc.: 131002.net/siphash – Török Edwin

Also see FarmHash, a successor to CityHash: code.google.com/p/farmhash – stevendaniels

xxHash claims to be 5x faster than CityHash. – Clay Bridges
Another answer (score 12):

The SHA algorithms (including SHA-256) are designed to be fast.

In fact, their speed can be a problem sometimes. In particular, a common technique for storing a password-derived token is to run a standard fast hash algorithm 10,000 times (storing the hash of the hash of the hash of the hash of the … password).

#!/usr/bin/env ruby
require 'securerandom'
require 'digest'
require 'benchmark'

def run_random_digest(digest, count)
  v = SecureRandom.random_bytes(digest.block_length)
  count.times { v = digest.digest(v) }
  v
end

Benchmark.bmbm do |x|
  x.report { run_random_digest(Digest::SHA256.new, 1_000_000) }
end

Output:

Rehearsal ------------------------------------
   1.480000   0.000000   1.480000 (  1.391229)
--------------------------- total: 1.480000sec

       user     system      total        real
   1.400000   0.000000   1.400000 (  1.382016)

answered by yfeldblum

Comments:

It's relatively fast, sure, for a cryptographic hashing algorithm. But the OP just wants to store values in a hashtable, and I don't think a cryptographic hash function is really appropriate for that. – Dean Harding

The question brought up (tangentially, it now appears) the subject of the cryptographic hash functions. That's the bit I am responding to. – yfeldblum

Just to put people off the idea of "a common technique for storing a password-derived token is to run a standard fast hash algorithm 10,000 times" – while common, that's just plain stupid. There are algorithms designed for these scenarios, e.g., bcrypt. Use the right tools. – TC1

Cryptographic hashes are designed to have a high throughput, but that often means they have high setup, teardown, .rodata and/or state costs. When you want an algorithm for a hashtable, you usually have very short keys, and lots of them, but do not need the additional guarantees of a cryptographic hash. I use a tweaked Jenkins' one-at-a-time myself. – mirabilos
Another answer (score 9):

I've plotted a short speed comparison of different hashing algorithms when hashing files.

The individual plots only differ slightly in the reading method and can be ignored here, since all files were stored in a tmpfs. Therefore the benchmark was not I/O-bound, if you are wondering.

(plots omitted: linear scale and logarithmic scale)

Algorithms include: SpookyHash, CityHash, Murmur3, MD5, SHA{1,256,512}.

Conclusions:

Non-cryptographic hash functions like Murmur3, CityHash and Spooky are pretty close together. One should note that CityHash may be faster on CPUs with SSE 4.2's CRC instruction, which my CPU does not have. SpookyHash was in my case always a tiny bit ahead of CityHash.
MD5 seems to be a good tradeoff when using cryptographic hash functions, although SHA256 may be more secure against the collision vulnerabilities of MD5 and SHA1.
The complexity of all algorithms is linear, which is really not surprising since they work blockwise. (I wanted to see if the reading method makes a difference, so you can just compare the rightmost values.)
SHA256 was slower than SHA512.
I did not investigate the randomness of the hash functions, but here is a good comparison of the hash functions that are missing in Ian Boyd's answer. This points out that CityHash has some problems in corner cases.

The source used for the plots:

https://github.com/sahib/rmlint/tree/gh-pages/plots (sorry for the ugly code)