How to ...

1. How is NGINX designed and how does it work?

- NGINX uses an asynchronous, event‑driven approach to handling connections.
- One worker process per CPU core
- When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a nonblocking fashion, reducing the number of context switches
- Command: `ps axw -o pid,ppid,user,%cpu,vsz,wchan,command | egrep '(nginx|PID)'`

- worker_processes – The number of NGINX worker processes (the default is 1). In most cases, running one worker process per CPU core works well, and we recommend setting this directive to auto to achieve that. There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O.
- worker_connections – The maximum number of connections that each worker process can handle simultaneously. The default is 512, but most systems have enough resources to support a larger number. The appropriate setting depends on the size of the server and the nature of the traffic, and can be discovered through testing.
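
A minimal top-level configuration reflecting the two directives above might look like this (the values shown are illustrative, not recommendations):

```nginx
# nginx.conf (fragment)
worker_processes auto;        # one worker process per CPU core

events {
    worker_connections 1024;  # per-worker limit; the right value is found by testing
}
```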

2. How does Apache work?

- An Apache server handles numerous HTTP requests at a single time. In order to do this, the server runs multiple "threads" of execution. A thread is a part of a program that branches off from the main program and runs at the same time in order to accomplish a specific task. Depending on the MPM in use (prefork, worker, or event), Apache dedicates a process or a thread to each HTTP request, which handles fetching and returning the requested web page. This allows Apache to serve web pages to multiple users at the same time.
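
The process/thread mix is controlled by the MPM configuration. A sketch of event-MPM tuning directives (the numbers are illustrative, not recommendations):

```apache
<IfModule mpm_event_module>
    StartServers             2
    MinSpareThreads         25
    MaxSpareThreads         75
    ThreadsPerChild         25
    MaxRequestWorkers      150
</IfModule>
```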

3. How does Node.js work? Is it multithreaded or single-threaded?

- Node is event-driven and single-threaded with background workers

- Multithreaded network app:

Multithreaded network apps handle this kind of I/O-bound workload like this:

request ──> spawn thread
              └──> wait for database request
                     └──> answer request
request ──> spawn thread
              └──> wait for database request
                     └──> answer request
request ──> spawn thread
              └──> wait for database request
                     └──> answer request

So the threads spend most of their time using 0% CPU, waiting for the database to return data. While doing so, each thread had to allocate the memory required for a thread, which includes a completely separate program stack, etc. Also, each one had to be started, which, while not as expensive as starting a full process, is still not exactly cheap.
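
The thread-per-request pattern above can be sketched in Go, using one goroutine per request as a stand-in for a thread; the database round trip is simulated with a sleep:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// simulateDB stands in for a blocking database round trip:
// the worker sits idle at ~0% CPU while it waits.
func simulateDB(id int) string {
	time.Sleep(50 * time.Millisecond)
	return fmt.Sprintf("result %d", id)
}

// handleAll spawns one worker per request, as in the diagram above.
func handleAll(n int) []string {
	results := make([]string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = simulateDB(i + 1) // wait for the "database", then answer
		}(i)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(handleAll(3))
}
```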

- Singlethreaded event loop

Since we spend most of our time using 0% CPU, why not run some code when we're not using CPU? That way, each request will still get the same amount of CPU time as multithreaded applications but we don't need to start a thread. So we do this:

request ──> make database request
request ──> make database request
request ──> make database request
database request complete ──> send response
database request complete ──> send response
database request complete ──> send response

In practice both approaches return data with roughly the same latency since it's the database response time that dominates the processing.

The main advantage here is that we don't need to spawn a new thread, so we avoid lots and lots of malloc calls, which would slow us down.
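
Go has no single-threaded event loop in the Node.js sense, but the completion-queue idea can be approximated with a channel: requests are fired without waiting, and one loop drains completions as they arrive. The async "database" below is simulated with a sleep:

```go
package main

import (
	"fmt"
	"time"
)

// startQuery fires an asynchronous "database request" and returns
// immediately; the completion later lands on the shared channel.
func startQuery(id int, done chan<- string) {
	go func() {
		time.Sleep(50 * time.Millisecond) // simulated database latency
		done <- fmt.Sprintf("response %d", id)
	}()
}

// eventLoop issues n requests up front, then a single loop handles
// each completion as it arrives, mirroring the diagram above.
func eventLoop(n int) []string {
	done := make(chan string, n)
	for i := 1; i <= n; i++ {
		startQuery(i, done) // "request -> make database request"
	}
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		out = append(out, <-done) // "database request complete -> send response"
	}
	return out
}

func main() {
	fmt.Println(eventLoop(3))
}
```

Completions are handled in whatever order they arrive, which is exactly the point: the loop never blocks on any single request.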

- Magic, invisible threading

The seemingly mysterious thing is how both approaches above manage to run the workload "in parallel". The answer is that the database is threaded. So our single-threaded app is actually leveraging the multithreaded behaviour of another process: the database.

- Where singlethreaded approach fails

A singlethreaded app fails big if you need to do lots of CPU calculations before returning the data. Now, I don't mean a for loop processing the database result. That's still mostly O(n). What I mean is things like doing Fourier transform (mp3 encoding for example), ray tracing (3D rendering) etc.

Another pitfall of singlethreaded apps is that they only utilise a single CPU core. So if you have a quad-core server (not uncommon nowadays) you're not using the other 3 cores.

- Where multithreaded approach fails

A multithreaded app fails big if you need to allocate lots of RAM per thread. First, the RAM usage itself means you can't handle as many requests as a singlethreaded app. Worse, malloc is slow. Allocating lots and lots of objects (which is common for modern web frameworks) means we can potentially end up being slower than singlethreaded apps. This is where node.js usually wins.

One use case that ends up making multithreading worse is when you need to run another scripting language in each thread. First you usually need to malloc the entire runtime for that language, then you need to malloc the variables used by your script.

So if you're writing network apps in C, Go, or Java, the overhead of threading will usually not be too bad. If you're writing a C web server to serve PHP or Ruby, then it's very easy to write a faster server in JavaScript or Ruby or Python.

- Hybrid approach

Some web servers use a hybrid approach. Nginx, for example, runs a pool of worker processes, each with its own event loop, and Apache's event MPM runs event loops in a pool of threads. Each loop processes requests single-threaded, but requests are load-balanced among the workers.

Some single-threaded architectures also use a hybrid approach. Instead of launching multiple threads from a single process you can launch multiple applications - for example, 4 node.js servers on a quad-core machine. Then you use a load balancer to spread the workload amongst the processes.
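
The multi-process layout above can be wired up with an nginx upstream block; the ports are illustrative:

```nginx
upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;  # round-robin by default
    }
}
```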

4. HTTP/2 vs HTTP/1.1

The main goals of developing HTTP/2 were:

- Protocol negotiation mechanism: electing a protocol, e.g. HTTP/1.1, HTTP/2, or other.
- High-level compatibility with HTTP/1.1: methods, status codes, URIs, and header fields.
- Page load speed improvements through:
  - Compression of request headers
  - Binary protocol
  - HTTP/2 server push
  - Request multiplexing over a single TCP connection
  - Request pipelining
  - Mitigation of head-of-line (HOL) blocking
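
As a concrete example, nginx enables HTTP/2 on a TLS listener with the `http2` flag (newer nginx releases also provide a separate `http2 on;` directive); the certificate paths below are placeholders:

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/ssl/example.crt;  # placeholder path
    ssl_certificate_key /etc/ssl/example.key;  # placeholder path
}
```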

5. What are WSGI and uWSGI?

WSGI: A Python spec that defines a standard interface for communication between an application or framework and an application/web server. This was created in order to simplify and standardize communication between these components for consistency and interchangeability. This basically defines an API interface that can be used over other protocols.

uWSGI: An application server container that aims to provide a full stack for developing and deploying web applications and services. The main component is an application server that can handle apps of different languages. It communicates with the application using the methods defined by the WSGI spec, and with other web servers over a variety of other protocols. This is the piece that translates requests from a conventional web server into a format that the application can process.

uwsgi: A fast, binary protocol implemented by the uWSGI server to communicate with a more full-featured web server. This is a wire protocol, not a transport protocol. It is the preferred way to speak to web servers that are proxying requests to uWSGI.

A common nginx config is the following:

location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;  # example socket path; must match the uWSGI setup
}

6. Init vs Systemd ? 

The "init" process has been around since the UNIX days. It is part of the Linux Standard Base (LSB). It's something we have been working with for many years, but there are newer init systems replacing it, such as systemd.
"systemd" is a new kind of init system, which replaces init. It was created at Red Hat. It has a faster boot-up time because it runs scripts/daemons in parallel, i.e. multiple scripts at a time.
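
A minimal systemd unit file shows the declarative style that replaces init scripts; the service name and binary path here are hypothetical:

```ini
# /etc/systemd/system/example.service  (hypothetical service)
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

It would be enabled and started with `systemctl enable --now example.service`.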

7. Differences between: FastCGI, CGI, mod_php, PHP-FPM

First, the two protocols:

- CGI is a way to run a server-side script when an HTTP request comes in; by itself this has nothing to do with PHP.

- FastCGI is a "better CGI": CGI is known to be slow, and FastCGI is a different approach with much faster results; this also has nothing to do with PHP.

Now the PHP related things:

- mod_php runs PHP as an Apache module; that is, PHP requests run inside the Apache process with everything that goes with it: the processes are defined by the Apache configuration, PHP runs with Apache's permissions, etc.

- php-fpm is PHP's FastCGI implementation; PHP-FPM runs as a standalone FastCGI server.
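
A typical nginx block handing PHP requests to a PHP-FPM socket looks roughly like this (the socket path varies by distribution):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # path varies by distribution
}
```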

8. When deploying a Go application, should I place it behind a reverse proxy such as Nginx, or just use Go's built-in web server?

Placing an application behind a proxy server has the following advantages:

1. You can easily set up SSL certificates with nginx; doing the same with the built-in Go server is a lot harder.

2. Nginx, for example, is very good at serving static files; you can cache files that do not change frequently, reducing latency. This means you offload the duty of serving files from your Go server.

3. Nginx lets you easily run your application on port 80, so you can hook your domain up to your application with ease.

4. It is more secure to use nginx, as it receives regular security updates.

5. It is easier to scale when running Go behind nginx, since nginx has built-in load-balancing support.

Deploy nginx as a proxy for a Go application:

- Create a service to auto-run the Go server
- Use nginx as a proxy server and pass requests to the Go server
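
A sketch of the nginx side, assuming the Go server listens on 127.0.0.1:8080 and runs under a process supervisor such as systemd; the port, domain, and headers are illustrative:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8080;   # the Go server's local address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```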

9. Achieving concurrency in Go

- What is concurrency?

- What is parallelism?

- Concurrency vs parallelism

- What is a computer process?

- What is a thread?

- Thread scheduling

- Concurrency in Go

- Threads vs goroutines

Since Go 1.5, Go programs run by default with GOMAXPROCS set to the number of cores available; in prior releases it defaulted to 1.

10. SSL

- How SSL Certificates Work

+ A browser or server attempts to connect to a website (i.e. a web server) secured with SSL. The browser/server requests that the web server identify itself.
+ The web server sends the browser/server a copy of its SSL certificate.
+ The browser/server checks to see whether or not it trusts the SSL certificate. If so, it sends a message to the web server.
+ The web server sends back a digitally signed acknowledgement to start an SSL encrypted session.
+ Encrypted data is shared between the browser/server and the web server.

- Self-signed certificate

+ A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.

+ A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate
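
For instance, a self-signed certificate can be generated with openssl in one non-interactive command; the subject and filenames here are illustrative:

```shell
# Generate a self-signed certificate and private key, valid for 365 days.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=example.internal"
```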

- Where Do I Buy An SSL Certificate?

+ SSL certificates need to be issued by a trusted Certificate Authority. Browsers, operating systems, and mobile devices maintain a list of trusted CA root certificates.

+ The root certificate must be present on the end user's machine in order for the certificate to be trusted. If it is not trusted, the browser will present untrusted-certificate error messages to the end user. In the case of e-commerce, such error messages cause an immediate lack of confidence in the website, and organizations risk losing business from the majority of consumers.

- Why you should buy an SSL certificate from a trusted Certificate Authority

The SSL certificate serves two purposes: encryption of traffic (for RSA key exchange, at least) and verification of trust. As you know, you can encrypt traffic with (or without, if we're talking SSL 3.0 or TLS) any self-signed certificate. But trust is accomplished through a chain of certificates. I don't know you, but I do trust Verisign (or at least Microsoft does, because Verisign has been paid lots of money to get its root installed in their operating systems by default), and since Verisign trusts you, then I trust you too. As a result, there's no scary warning when I go to such an SSL page in my web browser, because somebody I trust has said you are who you are.