Encrypt WordPress Server with Let’s Encrypt SSL certificate

# Install acme.sh tool
git clone https://github.com/Neilpang/acme.sh.git

cd acme.sh

./acme.sh --install
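
The installer copies the script into ~/.acme.sh, adds a shell alias, and registers a daily cron job for automatic renewals. A quick way to verify the installation (a sketch, assuming bash and a default install under the current user's home directory):

# Reload the shell profile so the acme.sh alias becomes available
source ~/.bashrc

# Print the installed version to confirm the tool works
acme.sh --version

# The installer adds a cron entry for automatic renewal; check it is there
crontab -l | grep acme.sh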

# Issue the certificate
cd ~/.acme.sh
# Issue an RSA cert
sudo ./acme.sh --issue -d blog.zhenglei.net -w /var/www/html/wordpress

# Issue an ECC cert
sudo ./acme.sh --issue -d blog.zhenglei.net -w /var/www/html/wordpress --keylength ec-256
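
Renewal is normally handled by the cron job that acme.sh installed, but it can also be triggered by hand; a sketch (--ecc selects the ECC copy of the certificate issued above):

# Renew the RSA cert manually (add --force to renew before the due date)
sudo ./acme.sh --renew -d blog.zhenglei.net

# Renew the ECC cert; --ecc tells acme.sh to operate on the ECC copy
sudo ./acme.sh --renew -d blog.zhenglei.net --ecc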

# Copy the certs into the target directory
sudo mkdir -p /etc/nginx/ssl

sudo ./acme.sh --installcert -d blog.zhenglei.net --key-file /etc/nginx/ssl/blog.zhenglei.net.ecc.key --fullchain-file /etc/nginx/ssl/blog.zhenglei.net.ecc.bundle --ecc
sudo ./acme.sh --installcert -d blog.zhenglei.net --key-file /etc/nginx/ssl/blog.zhenglei.net.key --fullchain-file /etc/nginx/ssl/blog.zhenglei.net.bundle
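
Optionally, --installcert can also be given a --reloadcmd, which acme.sh runs after every install and renewal so nginx picks up the new files automatically; a sketch, assuming nginx is managed by systemd:

sudo ./acme.sh --installcert -d blog.zhenglei.net --key-file /etc/nginx/ssl/blog.zhenglei.net.key --fullchain-file /etc/nginx/ssl/blog.zhenglei.net.bundle --reloadcmd "systemctl reload nginx"
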
# Update nginx config
server {
    #listen 80;
    listen 443;
    ssl on;
    ssl_certificate ssl/blog.zhenglei.net.bundle;
    ssl_certificate_key ssl/blog.zhenglei.net.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;
    ...
}
server {
    listen 80 default_server;
    server_name blog.zhenglei.net;

    # Let's Encrypt, http method
    location ~ \.well-known {
        root /var/www/html/wordpress/;
        allow all;
        access_log on;
        log_not_found on;
    }

    # Redirect everything else to HTTPS. The redirect sits in its own
    # location block so that the ACME challenge requests above are not
    # caught by a server-level return.
    location / {
        return 301 https://$server_name$request_uri;
    }
}
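
After updating the config, check the syntax, reload nginx, and confirm that port 80 redirects and port 443 serves the Let's Encrypt certificate (a sketch, assuming nginx runs under systemd):

# Check the configuration for syntax errors
sudo nginx -t

# Reload nginx so the new server blocks take effect
sudo systemctl reload nginx

# Port 80 should answer with a 301 redirect to https
curl -I http://blog.zhenglei.net/

# Port 443 should present the Let's Encrypt certificate
echo | openssl s_client -connect blog.zhenglei.net:443 -servername blog.zhenglei.net 2>/dev/null | openssl x509 -noout -issuer -dates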

Install the post-view plus plugin in WordPress

Two modifications in the Twenty Eleven theme, which is used by this blog:

To list the top 10 most-viewed articles of the blog, add the following to index.php:

<?php if (function_exists('get_most_viewed')): ?>
<ul>
<?php get_most_viewed(); ?>
</ul>
<?php endif; ?>

To show the view count for each article, add the following code to content.php:

<span>
<?php if (function_exists('the_views')) { the_views(); } ?>
</span>
<span class="sep"> | </span>

Web development in C

davidmoreno/onion

kore

klone

nxweb

 

fast C HTTP server library comparison & wishlist

Hi,

Trying to choose an embeddable HTTP server library for a project, and
also considering writing my own special-purpose code, I came up with
the following comparison of libonion vs. other C libraries that include
high-performance HTTP support and are currently maintained.

Licenses:

libevhtp+libevent – 3-clause BSD
libmicrohttpd – LGPL 2.1
libonion – Apache 2 (except for some examples) or GPLv2+
mongoose – GPLv2 (and commercial)

Build environment:

libevhtp+libevent – cmake+autotools
libmicrohttpd – autotools
libonion – cmake
mongoose – none (one large file, like SQLite)

Code size (“text” as reported by the size(1) command on the library or
on a tiny sample program if statically linked, on Scientific Linux 6.6
on x86_64):

libevhtp+libevent – ~500 KB, or ~200 KB without unicode.c.o and reg*.c.o
libmicrohttpd – ~100 KB default, ~55 KB with most ./configure --disable-*
libonion – ~100 KB with most ONION_USE_* set to false
mongoose – ~100 KB including JSON-RPC

For the smaller builds of libmicrohttpd and libonion, I kept threads
support enabled, but disabled pretty much everything else that could be
disabled without patching the code.  It looks like libmicrohttpd wins
this test.  Maybe there’s more code in libonion to disable (make into
compile-time options) – I haven’t checked yet.

Built-in JSON support:

libevhtp+libevent – none
libmicrohttpd – none
libonion – JSON builtin, JSON-RPC in Apache 2 licensed example
mongoose – JSON-RPC builtin (simple JSON parser not exported?)

All of this is for current versions on GitHub or in recent release
tarballs as of a few days ago.

Maybe someone else will find this useful.  I’d appreciate corrections.
It is very likely that I overlooked something.

On a related note, I found the list of alternate implementations on the
libmicrohttpd homepage very helpful.  That’s classy.  Thanks.

My wishlist:

A processes (pre-fork) + [e]poll mode, like nginx has.  Processes have
pros and cons vs. threads: more reliable, faster malloc/free (no lock
contention risk), but OTOH slower context switches (if running process
count exceeds number of logical CPUs).  I would likely prefer this mode,
but all four libraries appear to be missing it.

Ability to accept not only HTTP, but also raw TCP connections, and
handle them in application code along with the library-handled HTTP.
Such as for implementing JSON-RPC directly over TCP, while also having
it over TCP+HTTP, and without having to manage an own/separate
threads/processes pool.  Do any of the four have this?  I found no such
examples with any of them.

Easily and cleanly embeddable into an application’s source tree, while
also allowing easy updates to new upstream versions.  mongoose almost
achieves this, but at the expense of sacrificing meaningful separation
into multiple translation units within the library itself.  I think we
don’t have to pay this price.  We could have multiple files (10 or so?),
in a subdirectory, which are also easy to list in a project’s Makefile.
Maybe I’d do that for libonion, freeing it from cmake, but then updating
to new upstream versions would be harder.  Do I really have to bite the
cmake or/and autotools bullet for something as simple as accepting HTTP?

I’d prefer a more permissive license like 2-clause BSD or MIT.  But I
guess I’ll have to settle on Apache 2 or such.  mongoose’ use of GPLv2
is understandable – need to make money – but is otherwise a disadvantage
(even for a commercial project that could pay, and even when publishing
any source code changes is not a problem and would be planned anyway; we
just don’t want to put our time into something that we would not always
be able to reuse in other projects).

Optional JSON from the same upstream is a plus, ideally exported both as
a generic JSON parser and as JSON-RPC support.  Looks like only libonion
sort of delivers both (but the code might not be production quality).

Ability to exclude more of the functionality – for example, to include
only the POST method (and not compile in code for the rest).  I am
concerned not so much about code size per se, as I am about attack
surface, and about ease of code reviews (not having to determine if some
compiled-in code is actually dead code in a given case, but to know
reliably that it’s not compiled in).

On a related note, David’s use of Coverity for libonion is commendable,
but it looks abandoned since 2014, and many “defects” (even if false
positives) remained unfixed back then.

Mark’s use of Coverity for libevhtp is also commendable… and looks
abandoned since May 10, 2015.  It shows “48,919 Lines of Code Analyzed”,
only “4 Total defects” and “0 Outstanding” – I guess it means that
everything detected by Coverity before (which must have been many more
“defects”) had been eliminated prior to that run.  That’s impressive.
But we don’t know how many new “defects” may have appeared in the 9
months that passed.  Also, I haven’t looked into whether libevent has
been subjected to similar static analysis or not (although being
initially written by Niels Provos speaks in its favor, given Niels’
other work), and accepting TCP connections isn’t as much risk as parsing
HTTP and JSON.

I don’t give a lot of weight to the Coverity results for my
decision-making, but it shows whether the maintainers care, and there
are few other somewhat-meaningful metrics I could use before having
spent time to analyze and try to use the code myself.

Why am I posting this to the onion mailing list specifically?  I find it
likely that libonion wins for me, although not by a large margin (and
there’s a lot that I dislike about it).  This is not a final decision
yet.  I might as well end up reverting to writing special-purpose code
from scratch.

Thanks,

Alexander

HTTP proxies

https://imququ.com/post/web-proxy.html

 

HTTP proxies come in two forms, briefly introduced below:

The first is the ordinary proxy described in RFC 7230 – HTTP/1.1: Message Syntax and Routing (the revision of RFC 2616, the first part of the HTTP/1.1 specification). This kind of proxy plays the role of a "man in the middle": to the client that connects to it, it is the server; to the server it connects to, it is the client. Its job is simply to relay HTTP messages back and forth between the two ends.

The second is the tunneling proxy described in "Tunneling TCP based protocols through Web proxy servers". It carries the traffic in the body of the HTTP exchange, using HTTP to proxy any TCP-based application-layer protocol. This kind of proxy uses the HTTP CONNECT method to establish the connection, but CONNECT was not originally part of RFC 2616 – HTTP/1.1; a description of CONNECT and tunneling proxies was only added in the HTTP/1.1 revision published in 2014, see RFC 7231 – HTTP/1.1: Semantics and Content. In practice, this kind of proxy had already been widely implemented long before that.
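
The difference is easy to see with curl (a sketch, assuming an HTTP proxy listening on 127.0.0.1:8080; -x selects the proxy): for a plain http:// URL curl hands the whole request to the proxy, while for an https:// URL it first sends a CONNECT to open a tunnel.

# Ordinary proxy: the full HTTP request for the http:// URL goes to the
# proxy, which relays HTTP messages between client and origin server
curl -v -x http://127.0.0.1:8080 http://example.com/

# Tunneling proxy: curl first sends "CONNECT example.com:443 HTTP/1.1"
# to the proxy, then speaks TLS through the established tunnel
curl -v -x http://127.0.0.1:8080 https://example.com/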

Embedded web servers

Reposted from: http://www.cnblogs.com/xmphoenix/archive/2011/04/12/2013394.html

boa, thttpd, mini_httpd, shttpd, lighttpd, goahead, appweb

 

One netizen's personal opinion:

boa is fairly full-featured, but for embedded use many of those features are redundant (e.g. virtual hosts), and its memory usage is somewhat higher.
thttpd has fewer features and a simple implementation. It uses less memory and is also fairly easy to extend.
shttpd is reasonably full-featured, but it is not stable enough when handling binary data and occasionally misbehaves; needs further observation.
lighttpd and Apache are heavyweight servers: mature and stable but large, an option for complex embedded applications.
GoAhead is a rather specialized web server; most of its functionality exists to serve its own goform and ASP/JavaScript features. The final 2.1.8 release still has quite a few bugs (see below).
mini_httpd comes from the same author as thttpd, and its functionality is almost identical.
boa's shortcomings:
(1) No handling of CGI response headers. This can easily be fixed as described at http://bbs.chinaunix.net/viewthread.php?tid=824840
(2) POST data is buffered in a temporary file; on small systems that cannot create temporary files, this part of the code has to be modified by hand. Many people who report being unable to POST data after porting are hitting exactly this.
(3) …
thttpd's shortcomings:
(1) Support for the CGI 1.1 standard is incomplete (usually not a big deal); other HTTP headers required by the spec are not passed through, so the application never receives headers such as If-Modified-Since and Accept-Language.
(2) The socket is redirected straight to the CGI application, which causes a bug: with large amounts of POST data (e.g. file uploads), the CGI application cannot answer the browser until it has read all of the POST data.
(3) …
GoAhead's shortcomings:
(1) It is specialized; whether you like the goform and ASP features it provides is another matter.
(2) CGI binary output has many bugs.
(3) To keep everything in a single task, on many platforms it polls the receive queue with delays, so processing efficiency is not high.
(4) There are other bugs too numerous to list; they have to be fixed one by one when porting.