OnionFlation: How Attackers Weaponise Tor’s Only DoS Defence Against Itself

Tor’s proof-of-work puzzle system was designed as the one reliable defence against denial-of-service attacks on onion services. It was clever, it worked, and then a group of security researchers spent the better part of a year figuring out how to turn it into a weapon. The resulting family of attacks, dubbed OnionFlation, can take down any onion service for roughly $1.20 upfront and 10 cents an hour to maintain. The Tor Project has acknowledged the issue. It is not yet patched.

OnionFlation Tor attack diagram
OnionFlation: weaponising Tor’s proof-of-work defence against the users it was built to protect.

Why Onion Services Have Always Been a DoS Magnet

Before understanding OnionFlation, you need to understand the original problem it was supposed to solve. Onion services have always been disproportionately easy to knock offline, and the reason is architectural. On the clearnet, denial-of-service defences rely on one thing above all else: knowing who is attacking you. Rate limiting, IP scrubbing, CAPTCHA walls, traffic shaping — all of these require visibility into the source of traffic. An onion service has none of that. The server never sees the client’s IP address; that is the entire point. So every standard DoS mitigation becomes inapplicable in one stroke.

The asymmetry goes further. When a malicious client wants to flood an onion service, it sends high-volume requests to the service’s introduction point over a single Tor circuit. But the server, upon receiving each request, must open a brand-new Tor circuit to a different rendezvous point for every single one. Establishing a Tor circuit is computationally expensive: there is a full cryptographic key exchange at each hop. So the attacker pays the circuit-building cost once, while the server pays it for every request. This asymmetry is what makes ordinary DoS against onion services so effective, and it has nothing to do with OnionFlation. It is just the baseline condition.
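The cost asymmetry can be made concrete with a back-of-the-envelope model. The unit costs below are illustrative assumptions, not measured Tor figures:

```python
# Hypothetical cost model for the client/server asymmetry described above.
# CIRCUIT_COST and CELL_COST are illustrative assumptions, not measured values.

CIRCUIT_COST = 100   # relative cost of building a full 3-hop circuit (key exchanges)
CELL_COST = 1        # relative cost of sending one introduction request cell

def attacker_cost(n_requests: int) -> int:
    """Attacker builds ONE circuit to the introduction point,
    then sends every request over it."""
    return CIRCUIT_COST + n_requests * CELL_COST

def server_cost(n_requests: int) -> int:
    """Server must build a fresh rendezvous circuit per request."""
    return n_requests * CIRCUIT_COST

n = 1000
print(f"attacker: {attacker_cost(n)}, server: {server_cost(n)}, "
      f"amplification: {server_cost(n) / attacker_cost(n):.1f}x")
# attacker: 1100, server: 100000, amplification: 90.9x
```

Whatever the real constants are, the amplification grows with the request rate, which is the baseline condition the PoW defence set out to fix.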

In 2023, these attacks reached a sustained peak. The Tor Project issued an official statement acknowledging the Tor network had been under heavy attack for seven months, and brought in additional team members specifically to design a structural fix.

How Onion Service Routing Actually Works

A quick detour is worth it here because the routing model is central to everything that follows. When you connect to a clearnet site over Tor, your traffic passes through three relays: a guard node, a middle node, and an exit node. The exit node then connects directly to the destination server, which sits outside Tor. The server’s IP address is public and the final hop is unencrypted (unless HTTPS is used, but that is standard TLS at that point, nothing to do with Tor).

Onion services work differently. The server moves inside the Tor network. Before any clients connect, the server picks three ordinary Tor relays to act as introduction points and opens full three-hop Tor circuits to each of them. It then publishes a descriptor — containing its introduction points and its public key — into a distributed hash table spread across Tor’s network of directory servers. This is how clients discover how to reach the service.

When a client connects, the process looks like this:

# Simplified connection flow for an onion service

1. Client queries the distributed hash table for the onion URL
   → receives the list of introduction points

2. Client forms a 3-hop circuit to one introduction point

3. Client randomly selects a rendezvous point (any Tor relay)
   → forms a separate 2-hop circuit to it
   → sends the rendezvous point a secret "cookie" (a random token)

4. Client sends a message to the introduction point containing:
   - the rendezvous point's location
   - the cookie
   - all encrypted with the server's public key

5. Introduction point forwards the message to the server

6. Server forms a 3-hop circuit to the rendezvous point
   → presents the matching cookie

7. Rendezvous point stitches the two circuits together
   → client and server complete a cryptographic handshake
   → bidirectional encrypted communication begins

The end result is six hops total between client and server, with neither party knowing the other’s IP address. The rendezvous point is just blindly relaying encrypted traffic it cannot read. The price for this mutual anonymity is latency and, critically, the server-side cost of forming new Tor circuits on demand.

Tor onion service circuit diagram
Six hops, two stitched circuits, zero IP exposure. The elegance that also creates the attack surface.

Tor’s Answer: Proof-of-Work Puzzles (2023)

In August 2023, after months of sustained DoS attacks against the Tor network, the Tor Project deployed a new defence: proof-of-work puzzles — specified in full in Proposal 327 and documented at the onion services security reference. The mechanism is conceptually simple. Before the server forms a rendezvous circuit, the client must first solve a cryptographic puzzle. The server adjusts the puzzle difficulty dynamically based on observed load, broadcasting the current difficulty level globally via the same distributed hash table used for descriptors.

Critically, the difficulty is global, not per-client. There is a reason for this: giving any individual feedback to a single client would require forming a circuit first, which is exactly the expensive operation we are trying to avoid. So the puzzle difficulty is a single number that all prospective clients must solve before the server will engage with them.

For a legitimate user making a single connection, a few extra seconds is a minor inconvenience. For an attacker trying to flood the server with hundreds of requests per second, the puzzle cost scales linearly and quickly becomes infeasible. The approach brilliantly flips the asymmetry: instead of the server bearing the circuit-formation cost, the attacker now bears a cryptographic puzzle cost for every single request it wants to send. According to the paper, under active attack conditions without PoW, 95% of clients could not connect at all. With PoW active, connection times under the same attack were nearly indistinguishable from a non-attacked baseline. It was, by any measure, a success.
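Tor’s actual puzzle (Proposal 327) is based on the Equi-X scheme, not plain hashing; the hashcash-style loop below is only an illustration of why the client’s cost scales linearly with the advertised difficulty while verification stays cheap:

```python
# Simplified hashcash-style sketch of the client-side puzzle loop.
# NOT Tor's real algorithm (that is Equi-X based, per Proposal 327);
# this only illustrates the cost scaling.

import hashlib

def solve(seed: bytes, difficulty: int) -> int:
    """Find a nonce whose hash, read as an integer, clears the difficulty bar.
    Expected work grows linearly with `difficulty`."""
    target = 2**256 // max(difficulty, 1)   # higher difficulty -> smaller target
    nonce = 0
    while True:
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(seed: bytes, difficulty: int, nonce: int) -> bool:
    """Verification is a single hash, so the server's check is cheap."""
    digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2**256 // max(difficulty, 1)

nonce = solve(b"descriptor-seed", difficulty=1000)
print(verify(b"descriptor-seed", 1000, nonce))   # True
```

The single global difficulty value plays the role of `difficulty` here: every prospective client solves against the same number, broadcast via the descriptor.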

OnionFlation: Weaponising the Defence

The paper Onions Got Puzzled, presented at USENIX Security 2025, identified a fundamental flaw in how the puzzle difficulty update algorithm works. Rather than trying to overpower the puzzle system, the attacks trick the server into raising its own puzzle difficulty to the maximum value (10,000) without actually putting it under meaningful load. Once the difficulty is at maximum, even high-end hardware struggles to solve a single puzzle within Tor Browser’s 90-second connection timeout.

The researchers developed four distinct attack strategies.

Strategy 1: EnRush

The server evaluates its congestion state once every five minutes, then broadcasts a difficulty update. It cannot do this more frequently because each update requires writing to the distributed hash table across Tor’s global relay network; frequent writes would overwhelm it.

The server’s congestion check looks at the state of its request queue at the end of the five-minute window. It checks not just how many requests are queued but their difficulty levels. A single high-difficulty unprocessed request is enough to trigger a large difficulty increase, because the server reasons: “if clients are solving hard puzzles and still can’t get through, congestion must be severe.”

The EnRush attacker simply sends a small burst of high-difficulty solved requests in the final seconds of the measurement window. For the vast majority of the five-minute interval the queue was empty, but the server only checks once. It sees high-difficulty requests sitting unprocessed, panics, and inflates the difficulty to the maximum. Cost: $1.20 per inflation event.
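A toy simulation of the timing trick, under the simplifying assumption that the server inflates to the maximum whenever its single end-of-window check finds unprocessed requests at or above the current difficulty (the real C Tor control loop is more involved):

```python
# Toy model of the EnRush timing exploit. The update rule is a deliberate
# simplification of Tor's control loop, assumed for illustration only.

MAX_DIFFICULTY = 10_000

def end_of_window_update(current: int, queue: list[int]) -> int:
    """`queue` holds the effort values of requests still waiting when the
    once-per-five-minutes check fires."""
    if not queue:
        return max(current // 2, 1)      # idle window: back off
    if max(queue) >= current:
        return MAX_DIFFICULTY            # "hard puzzles can't get through"
    return current

difficulty = 100
# The queue was empty for 299 seconds of the window; the attacker sends a
# burst of solved high-effort requests in the final second. The server only
# looks once, at the end, and sees only the burst.
queue_at_check = [difficulty] * 5
difficulty = end_of_window_update(difficulty, queue_at_check)
print(difficulty)   # 10000
```

The point is that a snapshot-based check cannot distinguish "congested for five minutes" from "congested for the last second", which is exactly the gap EnRush buys for $1.20.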

Strategy 2: Temporary Turmoil

Instead of sending a few hard requests, the attacker floods the server with a massive volume of cheap, low-difficulty requests. This exploits a flaw in the difficulty update formula:

next_difficulty = total_difficulty_of_all_arrived_requests
                  ÷
                  number_of_requests_actually_processed

The server’s request queue has a maximum capacity. When it fills up, the server discards half the queue to make room. When this happens, the numerator (all arrived requests, including discarded ones) becomes very large, while the denominator (only successfully processed requests) remains low. The formula outputs an absurdly high difficulty. Cost: $2.80.
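The formula above can be exercised directly. `PROCESS_CAPACITY` and the request volumes are illustrative assumptions, not Tor’s real constants:

```python
# The flawed update formula in code: total arrived effort divided by the
# number of requests actually processed. PROCESS_CAPACITY is an assumed
# per-window processing limit, standing in for the queue-overflow behaviour.

PROCESS_CAPACITY = 100   # requests the server can handle per window (assumed)

def next_difficulty(arrived_efforts: list[int]) -> float:
    total_effort = sum(arrived_efforts)               # includes discarded requests
    processed = min(len(arrived_efforts), PROCESS_CAPACITY)
    return total_effort / max(processed, 1)

# Legitimate load within capacity: difficulty stays proportional to real effort
print(next_difficulty([10] * 80))        # 10.0

# Temporary Turmoil: a flood of one million effort-1 requests. Almost all are
# discarded, but they still count in the numerator.
print(next_difficulty([1] * 1_000_000))  # 10000.0
```

The numerator/denominator mismatch is the whole exploit: discarded requests inflate the top of the fraction while never appearing in the bottom.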

Strategy 3: Choking

Once the difficulty is inflated to the maximum via EnRush or Temporary Turmoil, the server limits itself to 16 concurrent rendezvous circuit connections. The attacker sends 16 high-difficulty requests but deliberately leaves all 16 connections half-open by refusing to complete the rendezvous handshake. The server’s connection slots are now occupied by dead-end circuits. No new legitimate connections can be accepted even from users who successfully solved the maximum-difficulty puzzle. Cost: approximately $2 per hour to maintain.

Strategy 4: Maintenance

After inflating the difficulty, the attacker needs to stop the server from lowering it again. The server decreases difficulty when it sees an empty queue at the measurement window. The maintenance strategy sends a small trickle of zero-difficulty requests, just enough to keep the queue non-empty. The current implementation counts requests regardless of their difficulty level, so even trivially cheap requests prevent the difficulty from dropping. Cost: 10 cents per hour.
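A sketch of why Maintenance is so cheap: the decrease check only asks whether the queue was empty, not whether the queued requests carried any effort. The halving step below is an assumption for illustration:

```python
# Toy model of the difficulty-decrease check exploited by Maintenance.
# The halving-on-idle rule is assumed; the "any request blocks the drop"
# behaviour mirrors the current implementation as described above.

def maybe_decrease(current: int, queue: list[int]) -> int:
    if not queue:                 # only an EMPTY queue triggers a decrease
        return max(current // 2, 1)
    return current                # even zero-effort requests block the drop

difficulty = 10_000
trickle = [0, 0, 0]               # a few free, zero-difficulty requests
print(maybe_decrease(difficulty, trickle))   # 10000 -- stays inflated
print(maybe_decrease(difficulty, []))        # 5000  -- would drop if truly idle
```

Counting request *effort* instead of request *presence* would close this particular hole, which is one of the tweaks the researchers evaluated.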

OnionFlation four attack strategies diagram
EnRush and Temporary Turmoil inflate the difficulty; Choking and Maintenance hold it there.

The Theorem That Makes This Hard to Fix

The researchers did not just develop attacks. They also proved, mathematically, why this class of problem is fundamentally difficult to solve. This is where the paper becomes genuinely interesting beyond the exploit mechanics.

They demonstrate a perfect negative correlation between two properties any difficulty update algorithm could have:

  • Congestion resistance: the ability to detect and respond to a real DoS flood, raising difficulty fast enough to throttle the attacker.
  • Inflation resistance: the ability to resist being tricked into raising difficulty when there is no real load.

Theorem 1: No difficulty update algorithm can be simultaneously resistant to both congestion attacks and inflation attacks.

Maximising one property necessarily minimises the other. Tor’s current implementation sits at the congestion-resistant end of the spectrum, which is why OnionFlation attacks are cheap. Moving toward inflation resistance makes the system more vulnerable to genuine flooding attacks, which is exactly what the PoW system was built to stop in the first place.

The researchers tried five different algorithm tweaks. All of them failed to stop OnionFlation at acceptable cost. The best result pushed the attacker’s cost from $1.20 to $25 upfront and $0.50 an hour, which is still trivially affordable.

The Proposed Fix: Algorithm 2

After exhausting incremental tweaks, the researchers designed a new algorithm from scratch. Instead of taking a single snapshot of the request queue every five minutes, Algorithm 2 monitors the server’s dequeue rate: how fast it is actually processing requests in real time. This makes the difficulty tracking continuous rather than periodic, removing the window that EnRush exploits.
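A very rough sketch of the dequeue-rate idea, not the paper’s actual Algorithm 2: difficulty tracks the effort the server is actually burning through per second, measured continuously. The exponential smoothing and the way `delta` is wired in are assumptions made for illustration:

```python
# Illustrative dequeue-rate controller. This is a sketch of the *idea*
# (continuous rate tracking instead of a periodic snapshot), not the
# algorithm from the paper; the EMA and delta usage are assumptions.

class DequeueRateController:
    def __init__(self, delta: float = 0.1):
        self.delta = delta          # operator-tunable trade-off knob (assumed role)
        self.rate = 0.0             # smoothed processed-effort per second

    def on_dequeue(self, effort: int, seconds_elapsed: float) -> None:
        instant = effort / max(seconds_elapsed, 1e-9)
        # Exponential moving average: because measurement is continuous,
        # a last-second burst cannot dominate the way it does in a
        # single end-of-window snapshot.
        self.rate = (1 - self.delta) * self.rate + self.delta * instant

    def suggested_difficulty(self) -> int:
        return max(1, round(self.rate))

ctl = DequeueRateController()
for _ in range(100):
    ctl.on_dequeue(effort=50, seconds_elapsed=1.0)   # steady legitimate load
print(ctl.suggested_difficulty())   # 50
```

Under this framing, an EnRush-style burst only nudges the smoothed rate, so the attacker must sustain real effort continuously, which is where the $383/hour figure comes from.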

The algorithm exposes a parameter called delta that lets onion service operators tune their own trade-off between inflation resistance and congestion resistance. The results are considerably better:

# With Algorithm 2 (default delta):
# EnRush cost to reach max difficulty: $383/hour (vs $1.20 one-time previously)

# With delta increased slightly by the operator:
# EnRush cost: $459/hour

# Choking becomes moot because EnRush and Temporary Turmoil
# can no longer inflate the difficulty in the first place.

This is a 300x increase in attacker cost under the default configuration. The researchers tested it against the same attacker setup they used to validate the original OnionFlation attacks and found that Algorithm 2 completely prevented difficulty inflation via EnRush and Temporary Turmoil.

That said, the authors are careful to note this is one promising approach, not a proven optimal solution. The proof that no algorithm can fully resolve the trade-off still stands; Algorithm 2 just moves the dial considerably further toward inflation resistance while keeping congestion resistance viable.

Where Things Stand: Prop 362

The researchers responsibly disclosed their findings to the Tor Project in August 2024. The Tor Project acknowledged the issue and shortly afterwards opened Proposal 362, a redesign of the proof-of-work control loop that addresses the exact structural issues identified in the paper. As of the time of writing, Prop 362 is still marked open. The fix is not yet deployed.

The delay reflects the structural difficulty: any change to the global difficulty broadcast mechanism touches the entire Tor relay network, not just onion service code. Testing and rolling out changes at that scale without disrupting the live network is a non-trivial engineering problem, entirely separate from the cryptographic and algorithmic design questions.

What Onion Service Operators Can Do Right Now

The honest answer is: not much, beyond sensible hygiene. The vulnerability is in the PoW difficulty update mechanism, which operators cannot replace themselves. But the following steps reduce your exposure.

Keep Tor updated

When Prop 362 ships, update immediately. Track Tor releases at blog.torproject.org. The fix will be a daemon update.

# Debian/Ubuntu — keep Tor from the official Tor Project repo
apt-get update && apt-get upgrade tor

Do not disable PoW

Disabling proof-of-work entirely (HiddenServicePoWDefensesEnabled 0) removes the only available DoS mitigation and leaves you exposed to straightforward circuit-exhaustion flooding. OnionFlation is bad; unprotected flooding is worse. Leave it on.

Monitor difficulty in real time

If you have Tor’s metrics port enabled, you can track the live puzzle difficulty and get early warning of an inflation attack in progress:

# Watch the suggested effort metric live
watch -n 5 'curl -s http://127.0.0.1:9052/metrics | grep suggested_effort'

# Or pipe directly from the metrics port if configured
# tor config: MetricsPort 127.0.0.1:9052

A sudden jump to 10,000 with no corresponding load spike in your service logs is a strong indicator of an OnionFlation attack rather than a legitimate traffic event.

Keep your service lightweight

Algorithm 2 improves cost for the attacker considerably but does not eliminate inflation attacks entirely. Running a resource-efficient service (minimal memory footprint, fast request handling) means your server survives periods of elevated difficulty with less degradation for users who do manage to solve puzzles and connect.

Redundant introduction points

Tor allows specifying the number of introduction points (default 3, maximum as set in your Tor configuration). More introduction points spread the attack surface somewhat, though this is a marginal benefit since the OnionFlation attack operates via the puzzle difficulty mechanism, not by targeting specific introduction points.

# torrc: set higher introduction point count
# (consult your Tor version docs for exact directive)
HiddenServiceNumIntroductionPoints 5
Onion service hardening diagram
Hardening steps for onion service operators while waiting for Prop 362 to ship.

Sources and Further Reading

Video Attribution

Credit to Daniel Boctor for the original live demonstration of this attack, including compiling Tor from source to manually set the puzzle difficulty to 10,000 and showcasing the real-time impact on connection attempts. The full walkthrough is worth watching:


nJoy 😉

Unexpected failover warning on Couchbase servers

This is the “expected” behavior. Let me explain it, with a cluster of 3 nodes and 1 replica.

So you have started with 1 node, so in this case you have only “active documents” (no replica)

Then you add another node and do a rebalance. Once it is done, you have 50% of the active data on each node and 50% of the replica data on each node.

Let’s add a new node again, just to have a more “realistic” cluster of 3 nodes. The node is added and the cluster is rebalanced, which means you now have, as you can guess, 33.33% of the data on each node (active and replica).

What you will have noticed is that a rebalance is an expensive operation, since the cluster has to move data (active and replica) between all the nodes.

You now have a well-balanced 3-node cluster.
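The percentages above, and the ones after a failover below, fall out of simple arithmetic. A toy model that ignores vBuckets and tracks shares only:

```python
# Toy model of a 1-replica Couchbase cluster losing one node, tracking
# only percentage shares (vBucket mechanics are deliberately ignored).

def after_failover(n_nodes: int) -> dict:
    # Before the failure: each of n nodes holds 100/n % of the active data
    # and 100/n % of the replica data.
    surviving = n_nodes - 1
    active_per_node = 100 / surviving        # replicas promoted to active
    # Replicas lost: the failed node's replica share is gone, and the
    # promoted replicas are no longer replicas.
    replica_remaining = 100 - 2 * (100 / n_nodes)
    return {"active_per_node": round(active_per_node, 2),
            "replica_total": round(replica_remaining, 2)}

print(after_failover(3))   # {'active_per_node': 50.0, 'replica_total': 33.33}
```

That missing two-thirds of the replica set is exactly why the console warns that some data is not currently replicated until you rebalance.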

Now you stop one node, or one node crashes… this means some of the data is not accessible (it is still there, just not available; you do not lose anything).

Here you have 2 options:
– if you restart the server, there is nothing to do: the cluster is back online entirely (a well-balanced 3-node cluster).
– you do a failover on the node that is down. Let’s explain this in detail.
Failover:
Failover:
So what is happening here: Couchbase will do this as fast as possible, to be sure all the data is available (read and write). The only thing that happens is that the replicas are promoted to active (for the keys that were active on the node that is now off).

So what is the status now?
– all the data is accessible in read/write for the application on 2 nodes, so you have 50% of the active data on each node.
– BUT you do not have all the replicas, since:
  – the replicas that were on the node that is off are not present
  – the replicas that have been promoted are not replicas anymore

This is why you see the message “Fail Over Warning: Rebalance required, some data is not currently replicated!” in your console.

Does it make sense?

So to be able to get back in a status that is “balanced” you need to do a rebalance.

Note: when you fail over a node, that node is removed from the cluster; to bring it back you need to add it again and rebalance (the data on that server is simply ignored).

Hope this clarifies the message.

Some pointers about this:
– http://docs.couchbase.com/couchbase-manual-2.2/#couchbase-admin-tasks-failover5
– http://docs.couchbase.com/couchbase-manual-2.2/#couchbase-admin-tasks-failover-addback4

Installing vim editor.

To install vim, first find out which package provides it, then install that package:

[root@testarossa-00-0c-29-47-8f-35 ~]# yum whatprovides vim-enhanced
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.crazynetwork.it
 * epel: fr2.rpmfind.net
 * extras: mirror.crazynetwork.it
 * updates: mirror.crazynetwork.it
2:vim-enhanced-7.0.109-7.el5.i386 : A version of the VIM editor which includes
                                  : recent enhancements.
Repo        : base
Matched from:

[root@testarossa-00-0c-29-47-8f-35 ~]# yum install vim-enhanced-7.0.109-7.el5.i386 
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.crazynetwork.it
 * epel: ftp.uni-koeln.de
 * extras: mirror.crazynetwork.it
 * updates: mirror.crazynetwork.it
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package vim-enhanced.i386 2:7.0.109-7.el5 set to be updated
--> Processing Dependency: vim-common = 2:7.0.109-7.el5 for package: vim-enhanced
--> Running transaction check
---> Package vim-common.i386 2:7.0.109-7.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package              Arch         Version                   Repository    Size
================================================================================
Installing:
 vim-enhanced         i386         2:7.0.109-7.el5           base         1.2 M
Installing for dependencies:
 vim-common           i386         2:7.0.109-7.el5           base         6.4 M

Transaction Summary
================================================================================
Install       2 Package(s)
Upgrade       0 Package(s)

Total download size: 7.7 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): vim-enhanced-7.0.109-7.el5.i386.rpm               | 1.2 MB     00:03     
(2/2): vim-common-7.0.109-7.el5.i386.rpm                 | 6.4 MB     00:18     
--------------------------------------------------------------------------------
Total                                           344 kB/s | 7.7 MB     00:22     
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : vim-common                                               1/2 
  Installing     : vim-enhanced                                             2/2 

Installed:
  vim-enhanced.i386 2:7.0.109-7.el5                                             

Dependency Installed:
  vim-common.i386 2:7.0.109-7.el5                                               

Complete!
[root@testarossa-00-0c-29-47-8f-35 ~]#

Disabling SELinux in CentOS

Sometimes, with some DB platforms, and especially when you are testing and want to reduce the number of variables during development, you do not want SELinux watching your back. While it is a must to enable SELinux on hardened production systems, it can be quite a pain to handle, and sometimes it needs disabling (if only for a short period). Here is how.

# Important

Changes you make to files while SELinux is disabled may give them an unexpected security label, and new files will not have a label. You may need to relabel part or all of the file system after re-enabling SELinux.

Command Line

From the command line, you can edit the /etc/sysconfig/selinux file. This file is a symlink to /etc/selinux/config. The configuration file is self-explanatory. Changing the value of SELINUX or SELINUXTYPE changes the state of SELinux and the name of the policy to be used the next time the system boots.

[root@host2a ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=permissive
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

# SETLOCALDEFS= Check local definition changes
SETLOCALDEFS=0

To switch SELinux to permissive mode immediately (the change lasts only until the next reboot), at the prompt type:

echo 0 > /selinux/enforce

From the GUI

Use the following procedure to change the mode of SELinux using the GUI.

# Note

You need administrator privileges to perform this procedure.

 

  1. On the System menu, point to Administration and then click Security Level and Firewall to display the Security Level Configuration dialog box.
  2. Click the SELinux tab.
  3. In the SELinux Setting list, select Disabled, Enforcing, or Permissive, and then click OK.
  4. If you changed from Enabled to Disabled or vice versa, you need to restart the machine for the change to take effect.

 

# Note

Changes made using this dialog box are immediately reflected in /etc/sysconfig/selinux.

Introduction to APT (Advanced Packaging Tool)

Advanced Packaging Tool

Ubuntu — and all Debian-based distros — includes the Advanced Packaging Tool (APT), which can be used to easily download and install software for the operating system. This article looks at APT, and how it is used.

Table of Contents
Installing Software on a Computer
Installing Software on Windows
Installing Software on Linux
Repositories
Enabling Additional Repositories
Software Updates
Configuring Package Updates
GUI Front-Ends to APT
Add/Remove Programs
Synaptic Package Manager
Using APT from the Command-Line
apt-get
dpkg
wget
apt-cache

Introduction to Yum

1. Introduction

 

Yum is a tool for automating package maintenance for a network of workstations running any operating system that uses the Red Hat Package Management (RPM) system for distributing packaged tools and applications. It is derived from yup, an automated package updater originally developed for Yellowdog Linux; hence its name: yum is “Yellowdog Updater, Modified”.

Yup was originally written and maintained by Dan Burcaw, Bryan Stillwell, Stephen Edie, and Troy Bengegerdes of Yellowdog Linux (an RPM-based Linux distribution that runs on Apple Macintoshes of various generations). Yum was originally written by Seth Vidal and Michael Stenner, both at Duke University at the time. Since then both Michael and Seth have moved on, Seth to Red Hat, where he remains the dominant force behind yum development and maintenance.

It is important to note that yum is an open source GPL project and that many people have contributed code, ideas, bug fixes and documentation. The AUTHORS list was up to 26 or so as of the time of this HOWTO snapshot; yum is a clear example of the power of open source development!

Yum is a GNU General Public License (GPL) tool; it is freely available and can be used, modified, or redistributed without any fee or royalty provided that the terms of its associated license are followed.

1.1 What Yum Can Do

(more…)

Configure, make and make install: the GNU configure and build system.

On Linux, installing software can be done in more than one way. It is always recommended to install software from the repositories using the yum or apt tools. These tools have a lot of logic in them to check package consistency, resolve dependencies, compare the local version to the one being installed, etc.

Yum and apt are discussed on other pages, but suffice it to say they are the tools you should use (depending on your platform, one will be preferred over the other: RPM-based systems, usually Red Hat downstream distributions, use yum, while Debian-based distributions use apt as the preferred package manager).

There are occasions, rare on average but increasingly common the more advanced the setup you are trying to deploy, when you need to compile a later version than the one your repository has. Stability buffs will say this is not good practice, but it is a necessary one if you are going after security: repositories are compiled (or should be) with a certain amount of testing before a version is updated, so there is a natural lag behind the latest stable version of any given package. This is especially true for packages that are heavily updated (usually due to security updates and bug fixes).

As always the two most important and valid reasons for running versions later than those in the repositories are :

  1. Security: a vulnerability becomes known that might compromise the security of the service or system, and there is a fix that has been tested.
  2. Features: new features are required that the older version in the repo does not support, or that were still in beta and have now been promoted to production-ready.

On Compiling

(more…)

The /etc/passwd File Format

The /etc/passwd file stores essential information which is required during login, i.e. user account information. /etc/passwd is a text file that contains a list of the system’s accounts, giving for each account some useful information like user ID, group ID, home directory, shell, etc. It should have general read permission, as many utilities, like ls, use it to map user IDs to user names, but write access only for the superuser (root).

The anatomy of /etc/passwd

The /etc/passwd file contains one entry per line for each user (or user account) of the system. The fields are separated by a colon (:); there are seven fields in total. It is one of the many database text files on UNIX-like systems. Generally, a passwd file entry looks as follows:

sample of passwd
A sample row from the /etc/passwd file

 

  1. Username: used when the user logs in. It should be between 1 and 32 characters in length.
  2. Password: an x character indicates that the encrypted password is stored in the /etc/shadow file.
  3. User ID (UID): each user must be assigned a user ID (UID). UID 0 (zero) is reserved for root and UIDs 1-99 are reserved for other predefined accounts. UIDs 100-999 are reserved by the system for administrative and system accounts/groups.
  4. Group ID (GID): the primary group ID (stored in the /etc/group file).
  5. User ID info: the comment field. It allows you to add extra information about the user, such as the user’s full name, phone number, etc. This field is used by the finger command.
  6. Home directory: the absolute path to the directory the user will be in when they log in. If this directory does not exist, the user’s directory becomes /.
  7. Command/shell: the absolute path of a command or shell (/bin/bash). Typically this is a shell, but note that it does not have to be one.
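The seven fields map directly onto a colon split. A minimal parser in Python, assuming a well-formed line (the sample user and values are made up for illustration):

```python
# Minimal /etc/passwd line parser: seven colon-separated fields, as listed above.

def parse_passwd_line(line: str) -> dict:
    username, password, uid, gid, gecos, home, shell = line.strip().split(":")
    return {
        "username": username,
        "password": password,   # "x" means the real hash lives in /etc/shadow
        "uid": int(uid),
        "gid": int(gid),
        "gecos": gecos,         # comment field: full name, phone number, etc.
        "home": home,
        "shell": shell,
    }

# Hypothetical sample entry, matching the toro user used in the examples below
entry = parse_passwd_line("toro:x:1000:1000:Toro User:/home/toro:/bin/bash")
print(entry["uid"], entry["shell"])   # 1000 /bin/bash
```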

Viewing User List

/etc/passwd is used for local users only. To see the list of all users, enter:

$ less /etc/passwd

To search for a username called toro, enter:

$ grep toro /etc/passwd

/etc/passwd file permissions

The permissions on the /etc/passwd file should be read-only for all users, i.e. 644 (-rw-r--r--), and the owner must be root:

$ ls -l /etc/passwd

Output:

-rw-r--r--. 1 root root 1563 Jul 13 11:03 /etc/passwd

Scanning through /etc/passwd file

One can read the /etc/passwd file using a while loop with the IFS separator as follows:

#!/bin/bash
# seven fields from /etc/passwd stored in $f1, $f2 ... $f7
#

while IFS=: read -r f1 f2 f3 f4 f5 f6 f7
do
     echo "User $f1 use $f7 shell and stores files in $f6 directory."
done < /etc/passwd

Another way to list all entries in the passwd database is with the getent utility. This shows all user accounts, regardless of the type of name service used. For example, if both local and LDAP name services are used for user accounts, the results will include all local and LDAP users:

$ getent passwd

The /etc/shadow file

Passwords are not actually stored in the /etc/passwd file; they are stored in the /etc/shadow file. In the good old days there was no great problem with this general read permission: everybody could read the encrypted passwords, but the hardware was too slow to crack a well-chosen password, and moreover, the basic assumption used to be that of a friendly user community; both assumptions are really wrong today. Almost all modern Linux / UNIX-like operating systems use the shadow password suite, where /etc/passwd has asterisks (*) instead of encrypted passwords, and the encrypted passwords are in /etc/shadow, which is readable only by the superuser.

The Linux Conundrum

A large company was taking over our smaller company, and they were on a trend to replace Linux and Java with MS Windows® and ASP.NET.

When the CIO was asked why not go the other way, since arguably our smaller company was more advanced, his answer, put plainly, was: “Linux and Java guys are so hard to find! (and expensive). MS Windows® guys are all over the place…”

I liked the proposition that Linux guys are not easy to find. Is this really so? (Feel free to comment.) GOOD!! 🙂

So now I know Linux/Unix is niche, and better paid, but I cannot help asking myself why this is so. Is MS Windows® so much easier, or is Linux still growing into a user OS? And why, in the server business, is ease of use given more importance than customisability and tweakability?

Also, is Linux in any deep way better than MS Windows®? In my opinion the differences are more in the approach and in the attitude of trust: towards a single focal point, i.e. MS in this case, or towards a community led by the benevolent dictator Linus Torvalds.

I think there is a whole discussion behind this, but money affairs aside, how did we end up where we are, with Linux being so popular and still perceived as difficult?

(more…)