January 17, 2020

Use gnusu instead of gksu

We used to rely on gksu to make working as root easier; as you know, gksu has been removed due to certain security concerns. In its place you can use gnusu, a tool of my own invention built on pkexec. Since gnusu runs on top of pkexec, it does not have that security problem; indeed, distribution developers themselves recommend using pkexec. On your system, root…

January 11, 2020

How to install the 5.3 Linux kernel on Debian 10

As you know, Debian 10 ships with the 4.19 Linux kernel. You may, however, want to get version 5.3, the latest stable release of the Linux kernel. The first step is to enable the Backports repository, which means editing the /etc/apt/sources.list file. Before making any changes, it is wise to back up /etc/apt/sources.list:

sudo cp /etc/apt/sources.list /etc/apt/sources.list.yedek

Now we can edit /etc/apt/sources.list. We will use the Nano text editor for this.

sudo nano -w /etc/apt/sources.list

The command above opens the /etc/apt/sources.list file. If you are not a sudo user, you can instead become root with the su command and your password, then open the file with:

nano -w /etc/apt/sources.list

Once the file is open, add a comment line:

# Debian Buster Backports.

and, right below it, the new backports repository line for Debian 10 Buster:

deb http://deb.debian.org/debian buster-backports main

After adding the repository line, save and close Nano by pressing Ctrl + O and then Ctrl + X. Now we need to update the package lists so the new repository is ready to use. Run the following command:

sudo apt update

With Debian Backports enabled on the system, we should now be able to find the 5.3 Linux kernel in the software repositories. Let's run this command:

apt search linux-image

Running the command above lists the various Linux kernel versions available for Debian 10 stable in the terminal.

Alternatively, you can filter the apt search output for “buster-backports” using the grep command:

apt search linux-image | grep buster-backports

Among the search results, you will see two variants of the 5.3 Linux kernel:

linux-image-5.3.0-0.bpo.2-amd64

linux-image-5.3.0-0.bpo.2-cloud-amd64

If you run Debian 10 stable on a desktop or laptop, the linux-image-5.3.0-0.bpo.2-amd64 package is the one to install, as it includes all of the various desktop Linux drivers the system needs. If you run Debian on a server, however, installing linux-image-5.3.0-0.bpo.2-cloud-amd64 may be more appropriate. We can now install the 5.3 Linux kernel on Debian 10 from its own software repositories. We will use the apt command for this:

sudo apt install linux-image-5.3.0-0.bpo.2-amd64

You should also make sure to install the matching linux-headers package:

sudo apt install linux-headers-5.3.0-0.bpo.2-amd64

Need to run Linux kernel 5.3 on a Debian server? First, decide whether you need the 5.3 cloud kernel or the 5.3 desktop kernel. Then run one of the following commands:

sudo apt install linux-image-5.3.0-0.bpo.2-cloud-amd64

or

sudo apt install linux-image-5.3.0-0.bpo.2-amd64

Depending on which image you installed, make sure you also install the matching linux-headers package with one of the following:

sudo apt install linux-headers-5.3.0-0.bpo.2-cloud-amd64

or

sudo apt install linux-headers-5.3.0-0.bpo.2-amd64

Once the installation is complete, reboot your Debian 10 system, then in the terminal run:

uname -a

You should see your new kernel in the output.

January 10, 2020

İlledelinux Debian System updated

(Updated again on 2020-01-10) I have updated the image with a number of fixes based on feedback. I removed the packages I considered unnecessary, which cut the image size in half. I improved the Package Install tool that I adapted for user selection (shown in the image below), making it more user-friendly, and added a network manager. Beyond that, so that there are no network problems during installation…

January 08, 2020

Automated Snapshot-Based Backup/Restore Operations on oVirt (Open Virtualization Manager) with the REST API

We will carry out this operation using an existing online full backup script written in Python. You can download the script from https://github.com/wefixit-AT/oVirtBackup. BACKUP All we have to do is adapt this tool to our own environment and then automate it with crontab. Let's walk through the steps below to see how this is done. First, on the host from which we will trigger the backup job against the oVirt environment (the Python script's...

Continue Reading

January 05, 2020

How to secure your internet activity with Linux system and VPN

Hackers can access, steal and sell your online activity data as well as manipulate it if you don’t use the right system and tools. The level of protection you want will largely influence which tools and systems to use. With a Linux system and VPN, it becomes possible to hide your browsing tracks, personal information, and various other online activities. When you have the right protection in place, not even the government can access your activity. Keep reading to learn how businesses and individuals alike can use a Linux system and VPN for ongoing protection of their online data. We will also explore why this is important and why you should care about your online data. Hackers steal data for a number of reasons. Sometimes, it’s for their own purposes. Other times, they sell it or give it to other entities. These entities may or may not know about the data collection processes the hackers used to gather the data.

What Is a VPN?

VPN stands for virtual private network. A VPN encrypts your traffic, making it difficult for the bad guys to steal your data as you browse. It also acts as an added layer of protection against the government tracking your online activity.

In some areas, a VPN even grants users access to certain content that is not normally available in their geographical areas. Such forms of content often include video, international gaming, certain servers, etc.

The VPN works to protect your online activity by making it appear as if you are logged in from a different location. As soon as you connect to the VPN, you can set your location to anywhere in the world.

Additionally, with a Linux system, you can improve the safety and protection of your data thanks to advanced security measures. Fixes for exploited Linux programs are generally developed and released well before other operating systems patch their equivalent programs.

How Does a Linux System Make Online Activity More Secure?

Getting fixes to exploits is of the utmost importance in both personal and business settings, particularly those sitting on large amounts of data.

Hackers, crackers, and phreakers steal people’s online data all the time for multiple reasons. Some do it to fight a cause, some steal it unintentionally, some do it for fun, a few do it for commercial espionage or sabotage, and lastly, it’s not uncommon for disgruntled employees to steal data for whistle blowing purposes.

A Linux system helps avoid several types of attacks:

  • Reading data
  • Denial of service
  • Altering/manipulating data
  • Access to system

Tips for Increasing Data Protection With a Linux System

To increase data protection through the use of a Linux system, you must first pinpoint what you mean by “secure.” To do this, you must assess what you intend to do with the system and just how secure you need the data to be. In most cases, Linux systems need security, at a minimum, in the following areas:

  • Authorization: Do not allow ANYONE access to the system unless they NEED access
  • Verification: Make sure users go through a two-step authentication process to verify their identity each time they log into the system
  • Integrity: All personal information must be protected and NEVER compromised
  • Non-repudiation: Must have proof of receipt of data; official receipt showing how you received the data and from whom
  • Privacy/confidentiality: You must abide by any privacy and confidentiality regulations such as the ISO 7984-2 International Standards Organization Security Standard
  • Availability: System must be able to perform its required function at all times during normal operating hours while maintaining security around the clock.

Choose a Native App

When installing a VPN on a Linux system, you will have two options: an open-source client or a native app. With a native app, you will get access to more features and less required configuration.

Because of this, it is highly suggested that any VPN you use at least comes in the form of a native client for Linux.

In addition to the dedicated app, users of a VPN that comes in the form of a native client enjoy sophisticated security, ultra-fast speeds, and the ability to run it from a command-line interface. Additionally, the server list is always kept up to date, making it simple to download and switch between UDP and TCP over the OpenVPN protocol.

Run Through Services and Customize Each of Them

When running services on a Linux system, you will have several types of facilities to choose from, including mail and WWW. Linux handles some of these services through a system of ports.

Take Port 21, for example, which is the standard FTP port. You can check the /etc/services file for a map of service names to port numbers.
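
For example, you can look up which port a service uses straight from that file (assuming the standard location of /etc/services on your distribution):

grep -w ftp /etc/services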

It’s ideal to have most of your services running through the configuration file /etc/inetd.conf. Take your time when going through this file, as it lets you customize how each of the available services is run and protected.

Keep Services in inetd.conf Turned OFF

Check the services in inetd.conf, and make sure they are not set to turn on by default. To achieve maximum security, you must turn them off. You can type the command netstat -vat to see which services are currently running on your Linux system, or alternatively you can use the ss command. For any services that you are unfamiliar with, make sure to look them up in /etc/inetd.conf.
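
For example, either of the following lists the sockets currently open on the machine (ss flags vary slightly between versions; these are common ones):

netstat -vat

ss -tuln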

Final Thoughts

There are numerous VPNs to choose from. The surfshark.com VPN is especially ideal for those who want to unblock lots of region-locked content from sources such as Netflix, Amazon Prime Video and Hulu.

Users of this VPN are also huge fans of their ability to connect to the VPN through an unlimited number of devices. This is an example of a VPN that has the features you should look for when researching for ways to use a Linux system to secure internet activity.

Anonymized Data Is Not Anonymous

We have all more or less accepted that we are living in some kind of dime-store George Orwell novel where our every movement is tracked and recorded in some way. Everything we do today, especially if there’s any kind of gadget or electronics involved, generates data that is of interest to someone. That data is constantly being gathered and stored, used by someone to build up a picture of the world around us. The average person today is much more aware of the importance of their own data security. We all understand that the wrong data in the wrong hands can be used to wreak havoc on both individuals and society as a whole. Now that there is a much greater general awareness of the importance of data privacy, it is much more difficult for malicious actors to unscrupulously gather sensitive data from us, as most people know not to hand it over.

Data Protection Laws

In most jurisdictions, there are laws and regulations in place that govern how personal data can be collected, stored, shared, and accessed.

While these laws are severely lacking in a number of areas, the trend in recent years has been to increasingly protect individuals from corporate negligence and excess, which has been welcomed by most consumers.

Probably the best-known data protection law is the famed GDPR, or the General Data Protection Regulation which came into force in 2018. Though in theory it has power only within the EU, in practice the law applies to every company that deals with EU citizens.

Its strict privacy requirements have made many businesses reconsider how they handle data, threatening misbehavers with fines that can climb into billions of euros (up to 4% of the company’s annual turnover).

Unlike the EU, the US has no single regulation on the federal level to protect the data of its citizens. Acknowledging that, some states have released their own privacy laws.

Probably the most extensive of them to date is the CCPA, or the California Consumer Privacy Act.

The act comes into force at the beginning of 2020 and grants the citizens of California many of the same rights that EU citizens have come to enjoy.

It allows Californians to know what data is collected about them and how it is used, to say no to the sale of their data, and to request its deletion.

Anonymized Data

One common theme that has emerged in the regulations from different jurisdictions is the notion of anonymized data. As the name implies, this is data that cannot be tied to a specific individual.

A set of anonymized data might be presented as belonging to a particular individual, but the identity of the subject is not revealed in the data.

Data anonymization presents an attractive common ground between the rights of consumers and those that want to make use of their personal data.

After all, information about who we are and what we do has long been the driving force behind many of today’s largest companies, including Google, Facebook, and Amazon.

But private corporations are not the only beneficiaries of our data. By removing any personally identifiable information from a dataset and anonymizing it, researchers are able to work with large and detailed datasets that contain a wealth of information without having to compromise any individual’s privacy.

By anonymizing data, we are also able to encourage people to share data that they would otherwise hold on to. Businesses and governments can access and trade vast amounts of data without infringing anyone’s privacy, thanks to anonymization.

Meanwhile, users don’t have to worry about data they generate being recorded and revealing information about them personally.

Data Anonymization Techniques

There are many ways to anonymize data, varying in cost and difficulty.

Perhaps the easiest technique is simply to remove some of the user’s direct identifiers. This is basically your main personal information. For instance, an insurance company could delete a customer’s name and date of birth, and call the data as good as anonymized.

Another method is to generalize the data of multiple users to reduce their precision. For instance, you could remove the last digits of a postcode or present a person’s age in a range rather than the exact number.

It is one of the methods Google uses to achieve k-anonymity – this elaborate term simply means that a certain number of people (defined by the letter k) should share the same property, such as ZIP code.

One more way is to introduce noise into the dataset. By noise I mean swapping around the information about certain properties between individuals or groups.

For example, this method could switch your car ownership details with another person. Your profile would change, but the whole dataset would remain intact for statistical analysis.

Finally, you can further protect the anonymized data you need to share by sampling it – that is, releasing the dataset in small batches. In theory, sampling helps to reduce the risk of re-identification.

Even if the data is enough to identify you as an individual, statistically there should be at least several other people with the same characteristics as you. Without having the whole dataset, there is no way to tell which person it really is.

Other data anonymization techniques exist, but these are some of the main ones.

Deanonymization

So, anonymization makes everyone a winner, right? Well, not quite.

Anyone who has worked extensively with data can testify as to just how little information is needed to identify a specific individual out of a database of many thousands.

One of the consequences of the massive volumes of data that now exist on all of us is that different data sources can be cross-referenced to identify common elements.

In some cases, this cross-referencing can instantly deanonymize entire data sets, depending on how exactly they have been anonymized.

Researchers were able to recover surnames of US males from a database of genetic information by simply making use of publicly available internet resources.

A publicly available dataset of London’s bike-sharing service could be used not only to track trips but also who actually made them.

Anonymized Netflix movie ratings were mapped to individuals by cross-referencing them with IMDB data, thus revealing some very private facts about users. These are only a few of the many similar examples.

Since the introduction of the GDPR, a number of businesses have been looking for ways of continuing to handle large volumes of customer data without falling afoul of the new regulations.

Many organizations have come to view anonymized datasets as a means of potentially circumventing the regulations. After all, if data isn’t tied to specific individuals, it can’t infringe on their privacy.

No Such Thing as Anonymous

According to new research conducted by researchers from Imperial College London, along with their counterparts at Belgium’s Université Catholique de Louvain, it is incredibly hard to properly anonymize data.

In order for data to be completely anonymous, it needs to be presented in isolation. You can use a VPN or change your IP address (more information about proxy servers can be found on Proxyway), and so on.

If enough anonymized data is given about an individual, all it takes is a simple cross-reference with other databases to ascertain who the data concerns.

Using their own prediction model, the researchers made a startling discovery: it would take only 15 pieces of demographic information to re-identify 99.98% of Americans.

What is more, only four base attributes (ZIP code, date of birth, gender, and number of children) would be needed to confidently identify 79.4% of the entire state of Massachusetts. According to the study, releasing data in small samples is not enough to protect an individual from detection.

Bearing in mind that researchers can deanonymize the records of an entire state, data brokers like Experian are selling anonymized data sets that contain hundreds of data points for each individual.

According to the researchers’ work, this data is anonymized in name only and anyone with the capacity to handle large datasets also has the resources to easily deanonymize them.

It doesn’t matter what methods are used to anonymize data. Even the more advanced techniques like k-anonymity might not be sufficient – not to mention that they are expensive.

In most cases, all that happens is that only immediately identifiable data like names and addresses are removed. This is far from enough.

The researchers’ findings urge us not to fall into a false sense of security. They also challenge the methods companies use to anonymize data in light of the strict regulatory requirements set forth by the GDPR and the forthcoming CCPA.

Wrap-Up

The long battle to get the average internet user to care about their data and privacy has been a tiring one. Anyone who has worked in cybersecurity over the last couple of decades can testify as to how much things have improved, but there is still a long way to go.

The notion that people’s data can be anonymized and rendered harmless is both incorrect and dangerous. It is important that people properly understand the implications of handing their data over. Don’t give up your data under the false impression that it can’t be tied to you.

December 30, 2019

Everything There Is To Know About Online Security

Online security is a major topic of discussion nowadays, with so many threats to your privacy (and even livelihood in some cases). Thanks to the ever-changing nature of technology, those dangers evolve right alongside it. So, while a truly “complete” guide isn’t achievable, we’ve done our best to cover all bases. Note that stuff like “use an antivirus” and “always update your software” should be common sense by now – so we won’t hammer on about those.

The Basics: HTTPS

HTTPS is the Secured version of the HyperText Transfer Protocol (HTTP) that lets you view pages in the first place. It uses SSL/TLS encryption to make sure the connection between you and the websites you browse remains private, including any passwords and sensitive data you transmit.
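
If you are curious what that encryption involves, you can inspect a site's certificate and TLS handshake from a terminal. Here's a minimal sketch using the openssl client (example.com is just a stand-in domain):

openssl s_client -connect example.com:443 -servername example.com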

Despite all this fancy phrasing, it’s as simple as using websites that have a (usually) green padlock next to the address bar.

You don’t need to go to extreme lengths to have some basic protection. Just use HTTPS websites exclusively and you already have your first line of defense.

There’s even a browser add-on called HTTPS Everywhere from the Electronic Frontier Foundation that attempts to force an HTTPS connection where possible.

Websites that don’t use HTTPS are punished in search result rankings by Google, while Mozilla has been phasing out features for non-secure websites. All of this is an orchestrated effort by such organizations to encrypt the entire Internet and make it safer to browse.

Obviously, companies like Google don’t have the best track record when it comes to your online privacy – but we can appreciate them doing some good every once in a while.

Their business model relies primarily on advertisements and mass data collection, so let’s look at how those can affect you.

Ads Can Get You in Trouble

Let’s be honest, nobody really ‘likes’ ads – but we do love supporting content creators in any way we can. Don’t be in such a hurry to disable your ad-blocker on your favorite news site or while watching YouTube, though.

Why? Well, just take a look at what happened in 2016 to such major sites as the New York Times, BBC, and the NFL. In short, their ads contained a strain of ransomware that encrypted the victims’ hard drives in exchange for a Bitcoin ransom.

Keep in mind: these aren’t just some sketchy websites where you’d expect malware from a mile away.

The major stinger is that people didn’t even need to click the ads for the attack to happen, according to Malwarebytes. Sure, the targeted people had out of date software with security holes – but who’s to say when an “updated” program will be hit next?

If you haven’t already, be sure to get a good ad-blocking extension for your browser. Maybe a script-blocker as well, considering the number of malicious JavaScript attacks out there. You’ll find a couple of great recommendations in the section below.

uBlock Origin and uMatrix

This duo of browser add-ons is a godsend to anyone who despises ads, pop-ups, auto-playing videos, and any other Internet nuisances.

They were both created by Raymond Hill, who not only works on them and provides them for free, but also explicitly refuses donations of any kind.

Performance-wise, uBlock Origin (uBO) was benchmarked against AdBlock Plus (ABP) and it’s pretty clear who the winner is. Moreover, it has no “acceptable ads” program like ABP, where advertisers pay them to whitelist their ads.

Depending on which filter lists you use (and there are plenty of them), uBO will also block ad tracking scripts that, well, track your browsing habits.

uMatrix has much of the same functionality, though it also allows you to block anything a website might throw at you:

  • Cookies
  • Audio and video media, and even images
  • Scripts, XHR, CSS elements, and frames

The fact that it stops requests from the domains you blacklist, across all websites, means you can get around Facebook’s “unavoidable” tracking.

You know; the thing that knows your browsing habits even if you don’t have a Facebook account – just because a page has a Like/Share button. Just a neat example of how to use uMatrix to preserve your privacy.

As a word of warning, this extension is geared towards advanced users. Don’t worry though; once you use it for several websites it’ll become second nature.

Everyone’s out for Your Data

We wish this was an exaggeration, but just look at how many people want your browsing habits for various reasons:

  • Internet Service Providers have been selling your browsing and location data for a profit
  • Government surveillance is at an all-time high, and more people are recognizing it since the Snowden revelations in 2013
  • Hacker numbers are increasing, with over 4 billion records exposed in the first half of 2019 alone
  • Almost 80% of websites have some form of ad tracking installed (which you can block with the previously mentioned add-ons)

It’s no wonder that nearly 25% of total Internet users use a Virtual Private Network (VPN) nowadays. If you’re not up to speed, a VPN encrypts (i.e. obfuscates) your data, making it unreadable to anybody who does not have the cryptographic key.

This means none of the four “usual suspects” above can see what you’re doing online. Moreover, any sensitive operations such as online banking, payments, and logging in to various services will be safe from hacking attempts.

On a minor downside, using a VPN tends to slow down your connection due to multiple factors – the distance between you and the server, the encryption/decryption process depends on your CPU power, and so on.

Fortunately, a super-fast VPN like ExpressVPN can help alleviate that. Since they have servers in 94 countries, it’s super easy to find one close to you – even when traveling abroad.

Free Wi-Fi = Free Hackers

Speaking of traveling – everyone loves using free Wi-Fi, especially on vacation. But have you ever noticed that your local café or that hotel you were staying at had two networks with the exact same name? Then you’ve most likely had an encounter with “Evil Twin” Wi-Fi hotspots.

Basically, hackers rely on peoples’ excitement for free stuff, so they create their own hotspots that mimic the real thing. Once you’re connected, your data is as good as stolen. Unless you use a VPN to encrypt it before leaving your device, that is.

In fact, this method was recommended by the Wi-Fi Alliance itself, since cyber criminals make it next to impossible to distinguish between a legitimate hotspot and a fake one. They even go as far as using the same SSID name and cloning the MAC address of the network.

Using a VPN is also a good idea even if you’re 100% sure that you’re connecting to the real thing, and the network is password-protected.

The reason being that both WPA2 and WPA3 (the current and latest Wi-Fi encryption protocols) suffer from security exploits that even an average-level hacker can profit from.

Take Care of Your Passwords

You wouldn’t think “password” would break the top 5 most common passwords, but it does. The top one is “123456” just for comparison. Your takeaway from here should be: never use weak passwords for your accounts. Oh, and don’t re-use them for others either.

Use a good password manager to help you create and store strong passwords that can’t be brute-forced in 5 minutes by a bored teenager and a video tutorial. As a side benefit, using a pass manager helps you avoid phishing scams.

Here’s how it goes down:

  • Cybercriminals create a fake website that mimics legitimate services (PayPal, home banking, etc.)
  • They send you an email saying you need to update your info and provide a link to their fake site
  • Then they wait for you to type in your login info willingly

Fortunately, your password manager literally won’t input your login details because it can’t recognize the website as the correct one. Hackers are pretty crafty with their fakes nowadays, but this way they can’t rely on human error for their schemes.

Multi-Factor Authentication (MFA)

Many of these hacking attempts can be stopped in their tracks by simply having SMS two-factor authentication (2FA) enabled. It’s not the best choice, but as many security guides will tell you: “it’s better than nothing.”

The better option is to use an authenticator app such as Authy, Google Authenticator, and others. There are also hardware authenticator tokens that you can plug into a USB port or hold against your phone for the same effect.

Watch Out for Voicemail

What does voicemail have to do with online security? A lot, as it turns out. Since many people don’t bother to secure their voicemail account with a long password, hackers can simply use a brute-force attack to gain access to it.

Then, by using the password reset function on your accounts, they can ask for the reset tokens to be sent through a voice call. All they must do is make sure that call never reaches you and goes to voicemail instead. Voila, your account has been hacked.

Text-based 2FA won’t protect you in this case, so the best thing to do would be to disable your voicemail entirely. You can also call your phone carrier and ask for assistance with disabling it.

If you really want to keep voicemail around, you need to protect it with a long random password as we mentioned. iPhone users simply need to go to Settings > Phone > Change Voicemail Password.

Use Encrypted Email Services

We’ve mentioned Google’s anti-privacy practices in the beginning. And while they say they’ve stopped reading your emails, the Wall Street Journal says otherwise. Practices of this kind are all fairly well documented for these big tech giants – there’s no secret here.

So if you don’t like your private life spied on by some poorly paid contractor somewhere, consider switching to an encrypted email provider.

Since your emails are encrypted, not even the providers themselves can read them. Even if hackers somehow breached their databases, all they’d find is undecipherable gibberish.

ProtonMail is a good choice, but there are plenty of others out there if you need something different. Ultimately, they all allow you to keep your business between you and the recipient.

Dealing with Social Media

There is no expectation of privacy on social media. Don’t look at us – those words were from Facebook counsel Orin Snyder. While that’s a heavy-handed way of putting it, it’s 100% true.

The only logical way of dealing with your social accounts (if you need online privacy and security) is to delete them.

If you need to keep them for whatever reason, you can at least control how much data they have on you. To avoid being a victim to the next Cambridge Analytica, these are your only two options. Now, you can make it easier to clean up your socials with a couple of apps.

The first one is Jumbo for iOS and Android. Not only can it set all your privacy settings on most services to “maximum” without collecting any data, but it can also delete your Tweets (3200 at a time; that’s a Twitter limitation), old Facebook posts, and even Amazon Alexa recordings.

Another one is MyPermissions, which allows you to see what apps you’ve connected to your Facebook, Twitter, and other accounts.

They can be viewed, removed, and reported (if you find anything fishy) in a single interface. You can also change the data access privileges on the apps if you intend to keep them.

Don’t want yet another phone app? Social Post Book Manager (Chrome extension) and TweetDelete are great alternatives to delete those embarrassing college posts.

Linux find command tutorial (with examples)

When it comes to locating files or directories on your system, the find command on Linux is unparalleled. It’s simple to use, yet has a lot of different options that allow you to fine-tune your search for files. Read on to see examples of how you can wield this command to find anything on your system. Every file is only a few keystrokes away once you know how to use the find command in Linux.

Find a directory

You can tell the find command to look specifically for directories with the -type d option. This will make find command only search for matching directory names and not file names.

find /path/to/search -type d -name "name-of-dir"


Find hidden files

Since hidden files and directories in Linux begin with a period, we can specify this search pattern in our search string in order to recursively list hidden files and directories.

find /path/to/search -name ".*"
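
If you want hidden files only, excluding hidden directories, you can combine the pattern with the -type option from the previous section:

find /path/to/search -type f -name ".*"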

Find files of a certain size or greater than X

The -size option on find allows us to search for files of a specific size. It can be used to find files of an exact size, files that are larger or smaller than a certain size, or files that fit into a specified size range. Here are some examples:

Search for files bigger than 10MB in size:

find /path/to/search -size +10M

Search for files smaller than 10MB in size:

find /path/to/search -size -10M

Search for files that are exactly 10MB in size:

find /path/to/search -size 10M

Search for files that are between 100MB and 1GB in size:

find /path/to/search -size +100M -size -1G

Find from a list of files

If you have a list of files (in a .txt file, for example) that you need to search for, you can search for your list of files with a combination of the find and grep commands. For this command to work, just make sure that each pattern you want to search for is separated by a new line.

find /path/to/search | grep -f filelist.txt

The -f option on grep means “file” and allows us to specify a file of strings to be matched with. This results in the find command returning any file or directory names that match those in the list.
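
As a quick illustration, a list file with two patterns, one per line, could be created like this (the filenames are placeholders):

printf "notes.txt\nreport.pdf\n" > filelist.txt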

Find not in a list

Using that same list of files we mentioned in the previous example, you can also use find to search for any files that do not fit the patterns inside the text file. Once again, we’ll use a combination of the find and grep commands; we just need an additional option specified with grep:

find /path/to/search | grep -vf filelist.txt

The -v option on grep means “inverse match” and will return a list of files that don’t match any of the patterns specified in our list of files.

Set the maxdepth

The find command will search recursively by default. This means that it will search the specified directory for the pattern you specified, as well as any and all subdirectories within the directory you told it to search.

For example, if you tell find to search the root directory of Linux (/), it will search the entire hard drive, no matter how many subdirectories of subdirectories exist. You can circumvent this behavior with the -maxdepth option.

Specify a number after -maxdepth to instruct find on how many subdirectories it should recursively search.

Search for files only in the current directory and don’t search recursively:

find . -maxdepth 1 -name "myfile.txt"

Search for files in the current directory and one subdirectory deeper:

find . -maxdepth 2 -name "myfile.txt"

Find empty files (zero-length)

To search for empty files with find, you can use the -empty flag. Search for all empty files:

find /path/to/search -type f -empty

Search for all empty directories:

find /path/to/search -type d -empty

It is also very handy to couple this command with the -delete option if you’d like to automatically delete the empty files or directories that are returned by find.

Delete all empty files in a directory (and subdirectories):

find /path/to/search -type f -empty -delete
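
The same combination works for empty directories. Because -delete implies depth-first processing, directories that only contain other empty directories get cleaned up as well:

find /path/to/search -type d -empty -delete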

Find largest directory or file

If you would like to quickly determine what files or directories on your system are taking up the most room, you can use find to search recursively and output a sorted list of files and/or directories by their size.

How to show the biggest file in a directory:

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | tail -1

Notice that the find command’s output was piped to two other handy Linux utilities: sort and tail. Sort will put the list of files in order by their size, and tail will output only the last file in the list, which is also the largest.

You can adjust the tail command if you’d like to output, for example, the top 5 largest files:

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | tail -5

Alternatively, you could use the head command to determine the smallest file(s):

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | head -5

If you’d like to search for directories instead of files, just specify “d” in the type option. How to show the biggest directory:

find /path/to/search -type d -printf "%s\t%p\n" | sort -n | tail -1
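
Note that %s for a directory is the size of the directory entry itself, not of everything inside it. If you want directories ranked by the total disk space of their contents, the du command is arguably a better fit; a minimal sketch:

du /path/to/search | sort -n | tail -5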

Find setuid set files

Setuid is an abbreviation for “set user ID on execution” which is a file permission that allows a normal user to run a program with escalated privileges (such as root).

This can be a security concern for obvious reasons, but these files can be easy to isolate with the find command and a few options.

The find command has two options to help us search for files with certain permissions: -user and -perm. To find files that are able to be executed with root privileges by a normal user, you can use this command:

find /path/to/search -user root -perm /4000


You can also include the -exec option in order to show a little more output about the files that find returns. The whole command looks like this:

find /path/to/search -user root -perm /4000 -exec ls -l {} \;

You could also substitute “root” in this command for any other user that you want to search for as the owner. Or, you could search for all files with SUID permissions and not specify a user at all:

find /path/to/search -perm /4000

Find sgid set files

Finding files with SGID set is almost the same as finding files with SUID, except the permissions for 4000 need to be changed to 2000:

find /path/to/search -perm /2000

You can also search for files that have both SUID and SGID set by specifying 6000 in the perms option:

find /path/to/search -perm /6000

List files without permission denied

When searching for files with the find command, you must have read permissions on the directories and subdirectories that you’re searching through. If you don’t, find will output an error message but continue to look throughout the directories that you do have permission on.


Although this could happen in a lot of different directories, it will definitely happen when searching your root directory.

That means that when you’re trying to search your whole hard drive for a file, the find command is going to produce a ton of error messages.

To avoid seeing these errors, you can redirect the stderr output of find to stdout, and pipe that to grep.

find / -name "myfile.txt" 2>&1 | grep -v "Permission denied"

This command uses the -v (inverse) option of grep to show all output except for the lines that say “Permission denied.”
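
Alternatively, if you don’t need to inspect the errors at all, you can simply discard the stderr stream instead of filtering it:

find / -name "myfile.txt" 2>/dev/null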

Find modified files within the last X days

Use the -mtime option on the find command to search for files or directories that were modified within the last X days. It can also be used to search for files older than X days, or files that were modified exactly X days ago.

Here are some examples of how to use the -mtime option on the find command:

Search for all files that were modified within the last 30 days:

find /path/to/search -type f -mtime -30

Search for all files that were modified more than 30 days ago:

find /path/to/search -type f -mtime +30

Search for all files that were modified exactly 30 days ago:

find /path/to/search -type f -mtime 30

If you want the find command to output more information about the files it finds, such as the modified date, you can use the -exec option and include an ls command:

find /path/to/search -type f -mtime -30 -exec ls -l {} \;

Sort by time

To sort through the results of find by modified time of the files, you can use the -printf option to list the times in a sortable way, and pipe that output to the sort utility.

find /path/to/search -printf "%T+\t%p\n" | sort

This command will sort the files older to newer. If you’d like the newer files to appear first, just pass the -r (reverse) option to sort.

find /path/to/search -printf "%T+\t%p\n" | sort -r

Difference between locate and find

The locate command on Linux is another good way to search for files on your system. It’s not packed with a plethora of search options like the find command is, so it’s a bit less flexible, but it still comes in handy.

locate myfile.txt

The locate command works by searching a database that contains all the names of the files on the system. The database that it searches through is updated with the updatedb command.

Since the locate command doesn’t have to perform a live search of all the files on the system, it’s much more efficient than the find command. But in addition to the lack of options, there’s another drawback: the database of files only updates once per day.

You can update this database of files manually by running the updatedb command:

updatedb

The locate command is particularly useful when you need to search the entire hard drive for a file, since the find command will naturally take a lot longer, as it has to traverse every single directory in real-time.

If searching a specific directory, known to not contain a large number of subdirectories, it’s better to stick with the find command.

CPU load of find command

When searching through loads of directories, the find command can be resource-intensive. It should inherently allow more important system processes to have priority, but if you need to ensure that the find command takes up fewer resources on a production server, you can use the ionice or nice command.

Monitor CPU usage of the find command:

top

Reduce the Input/Output priority of find command:

ionice -c3 -n7 find /path/to/search -name "myfile.txt"

Reduce the CPU priority of find command:

nice -n 19 find /path/to/search -name "myfile.txt"

Or combine both utilities to really ensure low I/O and low CPU priority:

nice -n 19 ionice -c2 -n7 find /path/to/search -name "myfile.txt"

I hope you find the tutorial useful. Keep coming back.

December 29, 2019

Raspberry Pi and sSMTP

Hello.

Some of us have started using a Raspberry Pi at home or at work.
Or perhaps you own a VPS (virtual server); it may even be hosted within your company.
Whether your server is virtual or physical, you should keep an eye on your system.
Of course, you cannot stay connected to your server around the clock.

Your server should be able to e-mail you in emergencies and on events such as cron jobs and system logins; otherwise you will never know what is going on.

For this you first need a mail account; it can be Gmail or Yandex.
I recommend Yandex. It has been sending Getgnu.org's e-mail without a problem for 5 years 🙂

Of course, you need to install a few packages before your server can send e-mail.

To send through Yandex, the ssmtp package is required.
ssmtp lets us send e-mail through an existing SMTP account.

And to compose mail, mailx is required.
Let's see whether mailx is in our package repository:

root@rihanna ~ # apt-cache search mailx
heirloom-mailx - feature-rich BSD mail(1)
mailutils - GNU mailutils utilities for handling mail
root@rihanna ~ #

WARNING!
If an MTA (postfix, qmail, exim, sendmail) is already installed on your server, do not install ssmtp!
Installing ssmtp can damage your existing MTA. No liability is accepted.

root@rihanna ~ # apt-get install -y heirloom-mailx ssmtp

Once the installation is complete, open the configuration file:

root@rihanna:~# nano /etc/ssmtp/ssmtp.conf

and edit it to match the following content:

# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
# Where will the mail seem to come from?
rewriteDomain=rihanna.FalancaFilanca.org.tr

# The full hostname
hostname=rihanna.FalancaFilanca.org.tr

# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
#FromLineOverride=YES

mailhub=smtp.yandex.com:587
AuthUser=FalancaFilanca@yandex.com
AuthPass=Parola_Parolam
UseSTARTTLS=YES

That is all for the sSMTP settings and the mailx installation.
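
One extra precaution worth taking: /etc/ssmtp/ssmtp.conf now contains your mail password in plain text, so it is sensible to restrict its permissions so that ordinary users cannot read it:

chmod 640 /etc/ssmtp/ssmtp.conf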

Now let's send a test e-mail.

mail -s "Merhaba" kime@FalanFilan.org.tr

Greetings.

.

That is how sending mail works: when you have finished writing, move to a new line and end the message with a single period (.).
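
If you would rather send mail non-interactively, say from a cron job or a monitoring script, you can pipe the message body in instead of typing it (the address below is the same placeholder as above):

echo "Test message from the server" | mail -s "Merhaba" kime@FalanFilan.org.tr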

That's all.

I look forward to your comments and questions.

Mutt + Gmail

RaspBerry Pi – mailx

15+ examples for Linux cURL command

In this tutorial, we will cover the cURL command in Linux. Follow along as we guide you through the functions of this powerful utility with examples to help you understand everything it’s capable of. The cURL command is used to download or upload data to a server, using one of its 20+ supported protocols. This data could be a file, email message, or web page. cURL is an ideal tool for interacting with a website or API, sending requests and displaying the responses to the terminal or logging the data to a file. Sometimes it’s used as part of a larger script, handing off the retrieved data to other functions for processing. Since cURL can be used to retrieve files from servers, it’s often used to download part of a website. It performs this function well, but sometimes the wget command is better suited for that job. We’ll go over some of the differences and similarities between wget and cURL later in this article. We’ll show you how to get started using cURL in the sections below.

Download a file

The most basic command we can give to cURL is to download a website or file. cURL will use HTTP as its default protocol unless we specify a different one. To download a website, just issue this command:

curl http://www.google.com

Of course, enter any website or page that you want to retrieve.


Doing a basic command like this with no extra options will rarely be useful, because this only tells cURL to retrieve the source code of the page you’ve provided.


When we run this command, the terminal fills with HTML and other web scripting code – not something that is particularly useful to us in this form.

Let’s download the website to an HTML file instead, so that the content can be displayed. Add the --output option to cURL to achieve this.

curl www.likegeeks.com --output likegeeks.html


Now the website we downloaded can be opened and displayed in a web browser.


If you’d like to download an online file, the command is about the same. But make sure to append the --output option to cURL as we did in the example above.

If you fail to do so, cURL will send the binary output of the online file to your terminal, which will likely cause it to malfunction.

Here’s what it looks like when we initiate the download of a 500KB word document.


The word document begins to download and the current progress of the download is shown in the terminal. When the download completes, the file will be available in the directory we saved it to.

In this example, no directory was specified, so it was saved to our present working directory (the directory from which we ran the cURL command).

Also, did you notice the -L option that we specified in our cURL command? It was necessary in order to download this file, and we go over its function in the next section.

Follow redirect

If you get an empty output when trying to cURL a website, it probably means that the website told cURL to redirect to a different URL. By default, cURL won’t follow the redirect, but you can tell it to with the -L switch.

curl -L www.likegeeks.com


In our research for this article, we found it was necessary to specify the -L on a majority of websites, so be sure to remember this little trick. You may even want to append it to the majority of your cURL commands by default.

Stop and resume download

If your download gets interrupted, or if you need to download a big file but don’t want to do it all in one session, cURL provides an option to stop and resume the transfer.

To stop a transfer manually, you can just end the cURL process the same way you’d stop almost any process currently running in your terminal, with a ctrl+c combination.


Our download began but was interrupted with ctrl+c; now let’s resume it with the following syntax:

curl -C - example.com/some-file.zip --output MyFile.zip

The -C switch is what resumes our file transfer, but also notice that there is a dash (-) directly after it. This tells cURL to resume the file transfer, but to first look at the already downloaded portion in order to see the last byte downloaded and determine where to resume.


Our file transfer was resumed and then proceeded to finish downloading successfully.

Specify timeout

If you want cURL to abandon what it’s doing after a certain amount of time, you can specify a timeout in the command. This is especially useful because some operations in cURL don’t have a timeout by default, so one needs to be specified if you don’t want it getting hung up indefinitely.

You can specify a maximum time to spend executing a command with the -m switch. When the specified time has elapsed, cURL will exit whatever it’s doing, even if it’s in the middle of downloading or uploading a file.

cURL expects your maximum time to be specified in seconds. So, to timeout after one minute, the command would look like this:

curl -m 60 example.com

Another type of timeout that you can specify with cURL is the amount of time to spend connecting. This helps make sure that cURL doesn’t spend an unreasonable amount of time attempting to contact a host that is offline or otherwise unreachable.

It, too, accepts seconds as an argument. The option is written as --connect-timeout.

curl --connect-timeout 60 example.com

Using a username and a password

You can specify a username and password in a cURL command with the -u switch. For example, if you wanted to authenticate with an FTP server, the syntax would look like this:

curl -u username:password ftp://example.com


You can use this with any protocol, but FTP is frequently used for simple file transfers like this.

If we wanted to download a file from that server, we just issue the same command but use the full path to the file.

curl -u username:password ftp://example.com/readme.txt


Use proxies

It’s easy to direct cURL to use a proxy before connecting to a host. cURL will expect an HTTP proxy by default, unless you specify otherwise.

Use the -x switch to define a proxy. Since no protocol is specified in this example, cURL will assume it’s an HTTP proxy.

curl -x 192.168.1.1:8080 http://example.com

This command would use 192.168.1.1 on port 8080 as a proxy to connect to example.com.

You can use it with other protocols as well. Here’s an example of what it’d look like to use an HTTP proxy to cURL to an FTP server and retrieve a file.

curl -x 192.168.1.1:8080 ftp://example.com/readme.txt

cURL supports many other types of proxies and options to use with those proxies, but expanding further would be beyond the scope of this guide. Check out the cURL man page for more information about proxy tunneling, SOCKS proxies, authentication, etc.

Chunked download large files

We’ve already shown how you can stop and resume file transfers, but what if we wanted cURL to only download a chunk of a file? That way, we could download a large file in multiple chunks.

It’s possible to download only certain portions of a file, in case you needed to stay under a download cap or something like that. The --range flag is used to accomplish this.


Sizes must be written in bytes. So if we wanted to download the latest Ubuntu .iso file in 100 MB chunks, our first command would look like this:

curl --range 0-99999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso ubuntu-part1

The second command would need to pick up at the next byte and download another 100 MB chunk.

curl --range 100000000-199999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso ubuntu-part2

Repeat this process until all the chunks are downloaded. The last step is to combine the chunks into a single file, which can be done with the cat command.

cat ubuntu-part? > ubuntu-18.04.3-desktop-amd64.iso
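
If the site publishes a checksum for the image, it’s worth confirming that the reassembled file matches it (compare the output against the published value):

sha256sum ubuntu-18.04.3-desktop-amd64.iso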

Client certificate

To access a server using certificate authentication instead of basic authentication, you can specify a certificate file with the --cert option.

curl --cert path/to/cert.crt:password ftp://example.com

cURL has a lot of options for the format of certificate files.


There are more certificate-related options, too: --cacert, --cert-status, --cert-type, etc. Check out the man page for a full list of options.

Silent cURL

If you’d like to suppress cURL’s progress meter and error messages, the -s switch provides that feature. It will still output the data you request, so if you’d like the command to be 100% silent, you’d need to direct the output to a file.

Combine this command with the -O flag to save the file in your present working directory. This will ensure that cURL returns with 0 output.

curl -s -O http://example.com

Alternatively, you could use the --output option to choose where to save the file and specify a name.

curl -s http://example.com --output index.html


Get headers

Grabbing the header of a remote address is very simple with cURL, you just need to use the -I option.

curl -I example.com


If you combine this with the -L option, cURL will return the headers of every address that it’s redirected to.

curl -I -L example.com

Multiple headers

You can pass headers to cURL with the -H option. And to pass multiple headers, you just need to use the -H option multiple times. Here’s an example:

curl -H 'Connection: keep-alive' -H 'Accept-Charset: utf-8' http://example.com

Post (upload) file

POST is a common way for websites to accept data. For example, when you fill out a form online, there’s a good chance that the data is being sent from your browser using the POST method. To send data to a website in this way, use the -d option.

curl -d 'name=geek&location=usa' http://example.com

To upload a file, rather than text, the syntax would look like this:

curl -d @filename http://example.com

Use as many -d flags as you need in order to specify all the different data or filenames that you are trying to upload.
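
For example, a request carrying two form fields and the contents of a file might look like this (the field names and filename are illustrative):

curl -d 'name=geek' -d 'location=usa' -d @bio.txt http://example.com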

You can use the -T option if you want to upload a file to an FTP server.

curl -T myfile.txt ftp://example.com/some/directory/

Send an email

Sending an email is simply uploading data from your computer (or another device) to an email server. Since cURL is able to upload data, we can use it to send emails. There are a slew of options, but here’s an example of how to send an email through an SMTP server:

curl smtp://mail.example.com --mail-from me@example.com --mail-rcpt john@domain.com --upload-file email.txt

Your email file would need to be formatted correctly. Something like this:

cat email.txt

From: Web Administrator <me@example.com>

To: John Doe <john@domain.com>

Subject: An example email

Date: Sat, 7 Dec 2019 02:10:15

John,

Hope you have a great weekend.

-Admin

As usual, more granular and specialized options can be found in the man page of cURL.

Read email message

cURL supports IMAP (and IMAPS) and POP3, both of which can be used to retrieve email messages from a mail server.

Login using IMAP like this:

curl -u username:password imap://mail.example.com

This command will list available mailboxes, but not view any specific message. To do this, specify the UID of the message with the -X option.

curl -u username:password imap://mail.example.com -X 'UID FETCH 1234'
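cURL speaks POP3 in much the same way. As a rough sketch (mail.example.com is a placeholder, and POP3 numbers messages starting at 1), the first command lists the messages in the mailbox and the second retrieves message number 1:

curl -u username:password pop3://mail.example.com

curl -u username:password pop3://mail.example.com/1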

Difference between cURL and wget

Sometimes people confuse cURL and wget because they’re both capable of retrieving data from a server. But this is the only thing they have in common.

We’ve shown in this article what cURL is capable of. wget provides a different set of functions. wget is the best tool for downloading websites and is capable of recursively traversing directories and links to download entire sites.

For downloading websites, use wget. If using some protocol other than HTTP or HTTPS, or for uploading files, use cURL. cURL is also a good option for downloading individual files from the web, although wget does that fine, too.

I hope you find the tutorial useful. Keep coming back.

Important Facts Everyone Needs to Know About Blockchain technology

If you were to ask the general population what they know about blockchain technology, you wouldn’t be surprised to hear that most of them either know nothing at all or can connect the blockchain to cryptocurrencies. They wouldn’t be wrong. Cryptocurrency is, in fact, dependent upon blockchain technology and it is the technology that has paved the way for bitcoin to become possible. Without it, the world’s most famous and valuable crypto wouldn’t exist. This is because when someone makes a payment with bitcoin, the payment is authenticated as another block of information on the chain. The blockchain takes the place of a bank to keep a record of payments, but unlike a bank, there is no central authority. The decentralised nature of bitcoin, therefore, hinges on this blockchain acting as a public ledger available to all but completely secure.

Looking Further than Bitcoin and Crypto

Yet, this is not the only use for blockchain technology. Despite bitcoin relying on its blockchain, it doesn’t work in the other direction.

Blockchains are used for other purposes in other industries. Here are some examples of industries that have already tapped into the blockchain potential:

#1: The Music Industry

One issue constantly facing musicians and those involved with creating music is that they do not receive the money they are owed.

It is not unheard of that megastars are seeking compensation from other music organisations for not paying them the royalties they deserve. Copyright infringements are rife and the court cases to address these problems are just as common.

The blockchain can counter this issue by providing a traceable and publicly available set of information for each song and who is owed what royalties from it.

The same idea can be applied to other forms of art such as photography. Photographers can trace the use of their images on the blockchain and even allow experts to track the origin of a piece of art.

#2: The Automotive Industry

One issue when buying a car is that you can never be certain that what you are buying is exactly how it was advertised or sold to you.

People can tamper with the mileage on a car and get around telling you about its maintenance history. What you think is a vehicle with an excellent track record could have been used a lot more and have been in the garage frequently.

This is why some businesses in the automotive industry have adopted blockchains and are using them on some vehicles to record maintenance and mileage. This is to prevent odometer fraud and vehicles being inaccurately sold by criminals.

#3: The Sports Industry

Some sports teams are using blockchain to create their own tokens for fans to use to buy match tickets and merchandise.

This is a way of creating a currency that is valuable to a select community. The blockchain is also being used by teams to implement fair voting systems to do with player jerseys and the like.

Using blockchains to cast votes is also a topic being considered by governments to ensure secure election processes without the need for recounts.

#4: The Freight Industry

The freight industry is welcoming blockchains to streamline often complex processes and reduce the amount of paperwork required en route.

It would enable businesses to track packages along their route to a destination as they are scanned by different workers. It was also rumored to be a solution to the backstop issue within the Brexit negotiations.

From these four examples, it is easy to see that blockchains serve more purposes than the one we most associate them with. In fact, it could be argued that the hype of owning a Luno Bitcoin wallet and sending secure payments around the world faster and cheaper may be making the general population blind to the other possibilities at hand.

The truth is, understanding the facts around blockchains will help us look beyond cryptocurrencies. Here are some of the key facts you may not know about blockchains already.

Blockchain Also Has a Place in Science

Thanks to grants and our natural thirst for knowledge, the scientific community has been able to amass a wealth of studies that help improve policies and inform public services.

However, scientists often get stuck when they try to replicate studies to authenticate results further, or tweak studies to find out more (and further our knowledge).

This happens because the original study’s data is not publicly available or easy to access. The blockchain could help in this matter by being the place where data is stored for scientific study.

Researchers across the globe could access a public ledger of data to conduct studies that other research has been based on, allowing future results to accurately verify information or increase our understanding.

Consider how many times two different research teams have had conflicting views about the same subject. The conflict may arise due to a difference in the quality or amount of data.

Blockchain technology holding the same data set would allow all research parties to work from the same information. Although this would help scientific groups to collaborate and progress with findings, it does also call for high-quality data to be used.

Blockchain as the Answer to ID Verification

Verifying our identity has become part and parcel of modern life. It is not just airports where we have to dig out our passport, but also gyms, libraries and any other time we sign up for a membership or service.

This can be time-consuming and inconvenient, especially when each vendor wants a different type of ID or a different combination of documentation.

Although blockchain has yet to be used in this way, it has the potential to be a solution, giving every citizen of a country – or a group of countries that opt into the strategy – a way to record personal information and their identity on the blockchain.

This would make ID verification seamless in certain locations.

EU citizens already have something similar to this with their information stored on a chip placed on their ID card. An upgraded version of this on the blockchain could be the answer, with healthcare professionals having access to this in the event of an emergency.

Soon You May Be Buying Blockchain-Based Products

The idea that blockchain technologies will be most used by businesses is not true. Yes, many businesses will adopt the technology, but the technology will also be placed in the hands of the consumer.

This is because products are also going to be made with blockchain technology powering them – and it is already happening.

Some smartphone developers have already made blockchain smartphones. Other products in line to be developed include devices around the home that revolutionise the way we live.

What we are referring to is devices classed within the Internet of Things (IoT).

These devices will be connected and change how we do tasks and chores at home – for example, telling a small device in the corner that you want to watch Netflix or to turn the dishwasher on.

The problem when lots of devices are connected is that they make you more vulnerable to hackers.

Blockchains can prevent hacks and protect your data by securing your at-home network of IoT devices. Methods of combining the two are already being worked on to keep consumers safe and their data protected.

Other Facts You Should Know About Blockchain Technology

The potential for blockchain has now been well established, but what has it already achieved? Here are some shorter facts about the technology that not a lot of people realize:

  1. The identity of Satoshi Nakamoto, the inventor of bitcoin and the person (or people) who made blockchains famous, is unknown. Certain individuals have been suggested as the person behind the revolution, but the actual identity of the person responsible remains a mystery.
  2. Blockchains do not have to be public. They can also be private, somewhat like an intranet within a business. This is what enables them to function as a source of ID without compromising on data privacy laws.
  3. It is estimated that blockchain development is at the stage the internet was at around two decades ago. Considering this and what it has achieved so far perfectly illustrates the potential blockchain technology encompasses.
  4. Blockchains are relatively untouched. Around half of the world’s population uses the internet, and around 0.05% of us are using blockchains. This number will rise when more businesses adopt the technology.
  5. Conventional banks are now seeking blockchains to help with their own processes. What was once a tool against fiat financial systems is now being used within them. This may make some crypto enthusiasts weep a little.
  6. A blockchain is at its most secure when it is first created. Many people assume that a blockchain will become more secure in time, but this is not the case.

It Doesn’t Mean We Should Forget about Cryptos

Just because the success of blockchain technology is not tied to cryptocurrency doesn’t mean we should forget about cryptos. Cryptocurrencies, as well as digital tokens, ICOs and smart contracts, are the biggest successes of blockchain to date.

The benefits of cryptocurrency are huge, with faster, cheaper and more convenient payments becoming available worldwide.

This has a significantly positive impact on unbanked populations who do not have access to a bank account. People sending money home to underdeveloped countries can send more money without incurring fees or time delays.

These glimpses into cryptocurrency’s power to disrupt the financial status quo should not be forgotten as other developments occur.

What Will Blockchains Do to the Job Market?

Technology, and the internet in particular, has had a significant impact on the job market in the developed world. Many jobs were replaced with machines that could do the work just as efficiently, and many of these jobs had been held by the working classes.

The same could happen once blockchain technology reaches its golden period. Many jobs may be displaced due to businesses utilising blockchains.

For example, earlier it was discussed that freight companies may use blockchains to streamline shipping processes. There is a strong chance that this development could put some workers out of a job.

Jobs may be lost due to blockchain, and they are likely to be lost mostly in manual professions. However, the blockchain may also create lots of new jobs that are not around today. Most of these jobs will be directed at tech-savvy types and us geeks.

So, Should You Invest in Blockchain Startups?

There are so many positive noises coming from industries and businesses that are using blockchains. Yet, it is crucial to realise that this trend is new.

No doubt there are investment opportunities to be secured with blockchain B2B businesses, but are blockchain startups the right investment?

The answer may be yes, but it may be smarter to invest your money in established technology companies who already own a strong market share.

Blockchains can be made for everyone and choosing a small startup may not guarantee you success. Placing your investment with companies who are actively looking at blockchains and already have a foothold in their market could be the wiser move.

The Takeaway Fact to Remember

There is a chance that you learned a lot about blockchains in this post, but you are not likely to retain everything you learned. If you need one fact about blockchain technology to leave with – and it is the most important one – it is that blockchains cannot be ignored.

They are a key player in the fourth industrial revolution and in that sense, they are exceptionally disruptive to all current technology.

Consider blockchains to be the puppet masters of the future of tech and many other industries. It may just take a little while for the curtain to be pulled back completely.

SSH port forwarding (tunneling) in Linux

In this tutorial, we will cover SSH port forwarding in Linux. This is a function of the SSH utility that Linux administrators use to create encrypted and secure relays across different systems. SSH port forwarding, also called SSH tunneling, is used to create a secure connection between two or more systems. Applications can then use these tunnels to transmit data. Your data is only as secure as its encryption, which is why SSH port forwarding is a popular mechanism to use. Read on to find out more and see how to set up SSH port forwarding on your own systems.

What is SSH port forwarding?

To put it simply, SSH port forwarding involves establishing an SSH tunnel between two or more systems and then configuring the systems to transmit a specified type of traffic through that connection.

There are a few different things you can do with this: local forwarding, remote forwarding, and dynamic port forwarding. Each configuration requires its own steps to set up, so we will go over each of them later in the tutorial.

Local port forwarding is used to make an external resource available on the local network. An SSH tunnel is established to a remote system, and traffic from the local network can use that tunnel to transmit data back and forth, accessing the remote system and network as if it was a part of the local network.

Remote port forwarding is the exact opposite. An SSH tunnel is established but the remote system is able to access your local network.

Dynamic port forwarding sets up a SOCKS proxy server. You can configure applications to connect to the proxy and transmit all data through it. The most common use for this is for private web browsing or to make your connection seemingly originate from a different country or location.

SSH port forwarding can also be used to set up a virtual private network (VPN). You’ll need an extra program for this called sshuttle. We cover the details later in the tutorial.

Why use SSH port forwarding?

Since SSH creates encrypted connections, this is an ideal solution if you have applications that transmit data in plaintext or use an unencrypted protocol. This holds especially true for legacy applications.

It’s also popular to use it for connecting to a local network from the outside. For example, an employee using SSH tunnels to connect to a company’s intranet.

You may be thinking this sounds like a VPN. The two are similar, but SSH tunnels are created for specific traffic, whereas VPNs are more for establishing general connections.

SSH port forwarding will allow you to access remote resources by just establishing an SSH tunnel. The only requirement is that you have SSH access to the remote system and, ideally, public key authentication configured for password-less SSHing.

How many sessions are possible?

Technically, you can specify as many port forwarding sessions as you’d like. Networks use 65,535 different ports, and you are able to forward any of them that you want.

When forwarding traffic, be cognizant of the services that use certain ports. For example, port 80 is reserved for HTTP. So you would only want to forward traffic on port 80 if you intend to forward web requests.

The port you forward on your local system doesn’t have to match that of the remote server. For example, you can forward port 8080 on localhost to port 80 on the remote host.

If you don’t care what port you are using on the local system, select one between 2,000 and 10,000 since these are rarely used ports. Smaller numbers are typically reserved for certain protocols.

Local forwarding

Local forwarding involves forwarding a port from the client system to a server. It allows you to configure a port on your system so that all connections to that port will get forwarded through the SSH tunnel.

Use the -L switch in your ssh command to specify local port forwarding. The general syntax of the command is like this:

ssh -L local_port:remote_ip:remote_port user@hostname.com

Check out the example below:

ssh -L 80:example1.com:80 example2.com

local port forwarding

This command would forward all requests to example1.com to example2.com. Any user on this system that opens a web browser and attempts to navigate to example1.com will, in the background, have their request sent to example2.com instead and display a different website.

Such a command is useful when configuring external access to a company intranet or other private network resources.
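Another common use is reaching a service that only listens on the remote machine’s loopback interface. A minimal sketch, assuming a PostgreSQL server running on db.example.com and a free local port 5433 (the -N flag simply tells ssh not to run a remote command):

ssh -N -L 5433:localhost:5432 user@db.example.com

While that tunnel is up, a client on your machine connects to 127.0.0.1:5433 as if the database were local – for example, psql -h 127.0.0.1 -p 5433 -U dbuser, where dbuser is a placeholder.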

Test SSH port forwarding

To see if your port forwarding is working correctly, you can use the netcat command. On the client machine (the system where you ran the ssh -L command), type the netcat command with this syntax:

nc -v remote_ip port_number

Test port forwarding using netcat

If the port is forwarded and data is able to traverse the connection successfully, netcat will return with a success message. If it doesn’t work, the connection will time out.

If you’re having trouble getting the port forwarding to work, make sure you’re able to ssh into the remote server normally and that you have configured the ports correctly. Also, verify that the connection isn’t being blocked by a firewall.
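For example, say you had forwarded local port 8080 to a remote web server; a quick check against the loopback address would look like this:

nc -v 127.0.0.1 8080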

Persistent SSH tunnels (Using Autossh)

Autossh is a tool that can be used to create persistent SSH tunnels. The only prerequisite is that you need to have public key authentication configured between your systems, unless you want to be prompted for a password every time the connection dies and is reestablished.

Autossh may not be installed by default on your system, but you can quickly install it using apt, yum, or whatever package manager your distribution uses.

sudo apt-get install autossh

The autossh command is going to look pretty much identical to the ssh command we ran earlier.

autossh -L 80:example1.com:80 example2.com

Persistent SSH port forwarding autossh

Autossh will make sure that tunnels are automatically re-established in case they close because of inactivity, remote machine rebooting, network connection being lost, etc.
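A slightly more robust invocation – still just a sketch built on the example above – disables autossh’s monitor port in favor of SSH’s own keep-alive probes, skips running a remote command, and backgrounds the process:

autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 80:example1.com:80 example2.com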

Remote forwarding

Remote port forwarding is used to give a remote machine access to your system. For example, if you want a service on your local computer to be accessible by a system(s) on your company’s private network, you could configure remote port forwarding to accomplish that.

To set this up, issue an ssh command with the following syntax:

ssh -R remote_port:local_ip:local_port user@hostname.com

If you have a local web server on your computer and would like to grant access to it from a remote network, you could forward port 8080 (common http alternative port) on the remote system to port 80 (http port) on your local system.

ssh -R 8080:localhost:80 geek@likegeeks.com

Remote port forwarding
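One caveat worth noting: by default, sshd binds a remote forward to the remote machine’s loopback interface only, so other hosts on that network can’t reach it. If you need that, set GatewayPorts yes in the remote machine’s /etc/ssh/sshd_config and bind the forward explicitly:

ssh -R 0.0.0.0:8080:localhost:80 geek@likegeeks.com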

Dynamic forwarding

SSH dynamic port forwarding will make SSH act as a SOCKS proxy server. Rather than forwarding traffic on a specific port (the way local and remote port forwarding do), this will forward traffic across a range of ports.

If you have ever used a proxy server to visit a blocked website or view location-restricted content (like viewing stuff on Netflix that isn’t available in your country), you probably used a SOCKS server.

It also provides privacy, since you can route your traffic through a SOCKS server with dynamic port forwarding and prevent anyone from snooping log files to see your network traffic (websites visited, etc).

To set up dynamic port forwarding, use the ssh command with the following syntax:

ssh -D local_port user@hostname.com

So, if we wanted to forward traffic on port 1234 to our SSH server:

ssh -D 1234 geek@likegeeks.com

Once you’ve established this connection, you can configure applications to route traffic through it. For example, on your web browser:

Socks proxy

Type the loopback address (127.0.0.1) and the port you configured for dynamic port forwarding, and all traffic will be forwarded through the SSH tunnel to the remote host (in our example, the likegeeks.com SSH server).
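Command line tools can use the tunnel too. For instance, cURL can route a request through the SOCKS proxy we just created on port 1234 to check which IP address the outside world sees:

curl --socks5-hostname 127.0.0.1:1234 ipinfo.io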

Multiple forwarding

For local port forwarding, if you’d like to set up more than one port to be forwarded to a remote host, you just need to specify each rule with a new -L switch each time. The command syntax is like this:

ssh -L local_port_1:remote_ip:remote_port_1 -L local_port_2:remote_ip:remote_port_2 user@hostname.com

For example, if you want to forward ports 8080 and 4430 to 192.168.1.1 ports 80 and 443 (HTTP and HTTPS), respectively, you would use this command:

ssh -L 8080:192.168.1.1:80 -L 4430:192.168.1.1:443 user@hostname.com

For remote port forwarding, you can set up more than one port to be forwarded by specifying each new rule with its own -R switch. The command syntax is like this:

ssh -R remote_port_1:local_ip:local_port_1 -R remote_port_2:local_ip:local_port_2 user@hostname.com

List port forwarding

You can see what SSH tunnels are currently established with the lsof command.

lsof -i | egrep '\<ssh\>'

SSH tunnels

In this screenshot, you can see that there are 3 SSH tunnels established. Add the -n flag to have IP addresses listed instead of resolving the hostnames.

lsof -i -n | egrep '\<ssh\>'

SSH tunnels n flag

Limit forwarding

By default, SSH port forwarding is pretty open. You can freely create local, remote, and dynamic port forwards as you please.

But if you don’t trust some of the SSH users on your system, or you’d just like to enhance security in general, you can put some limitations on SSH port forwarding.

There are a couple of different settings you can configure inside the sshd_config file to put limitations on port forwarding. To configure this file, edit it with vi, nano, or your favorite text editor:

sudo vi /etc/ssh/sshd_config

PermitOpen can be used to specify the destinations to which port forwarding is allowed. If you only want to allow forwarding to certain IP addresses or hostnames, use this directive. The syntax is as follows:

PermitOpen host:port

PermitOpen IPv4_addr:port

PermitOpen [IPv6_addr]:port

AllowTCPForwarding can be used to turn SSH port forwarding on or off, or specify what type of SSH port forwarding is permitted. Possible configurations are:

AllowTCPForwarding yes #default setting

AllowTCPForwarding no #prevent all SSH port forwarding

AllowTCPForwarding local #allow only local SSH port forwarding

AllowTCPForwarding remote #allow only remote SSH port forwarding
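Putting these together, a minimal sketch of a restrictive configuration – the IP address is a placeholder for your own internal server – might look like this:

# only allow local forwards, and only to the internal web server
AllowTCPForwarding local
PermitOpen 192.168.1.10:80 192.168.1.10:443

Remember to restart the SSH daemon afterwards (for example, with sudo systemctl restart sshd) for the changes to take effect.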

To see more information about these options, you can check out the man page:

man sshd_config

Low latency

The only real problem that arises with SSH port forwarding is that there is usually a bit of latency. You probably won’t notice this as an issue if you’re doing something minor, like accessing text files or small databases.

The problem becomes more apparent when doing network intensive activities, especially if you have port forwarding set up as a SOCKS proxy server.

The reason for the latency is because SSH is tunneling TCP over TCP. This is a terribly inefficient way to transfer data and will result in slower network speeds.

You could use a VPN to prevent the issue, but if you are determined to stick with SSH tunnels, there is a program called sshuttle that corrects the issue. Ubuntu and Debian-based distributions can install it with apt-get:

sudo apt-get install sshuttle

If the package manager on your distribution doesn’t have sshuttle in its repository, you can clone it from GitHub:

git clone https://github.com/sshuttle/sshuttle.git

cd sshuttle

./setup.py install

Setting up a tunnel with sshuttle is different from the normal ssh command. To setup a tunnel that forwards all traffic (akin to a VPN):

sudo sshuttle -r user@remote_ip -x remote_ip 0/0 -vv

sshuttle command

Break the connection with a ctrl+c key combination in the terminal. Alternatively, to run the sshuttle command as a daemon, add the -D switch to your command.

Want to make sure that the connection was established and the internet sees you at the new IP address? You can run this curl command:

curl ipinfo.io

curl IP address

I hope you find the tutorial useful. Keep coming back.

December 27, 2019

Mustafa Akgül Özgür Yazılım Kış Kampı [2020]

Organized by the Linux Users Association (Linux Kullanıcıları Derneği) and Eskişehir Anadolu University, this year’s event takes place January 25 – 28 at the Faculty of Economics and Administrative Sciences on Anadolu University’s Yunus Emre Campus in Eskişehir. Participation in the training sessions, which run in parallel classes covering different fields and levels of knowledge, is free of charge. Participants…

December 26, 2019

İlledelinux Debian System

I’m sharing a different project that I’ve named İlledelinux Debian System. Built on Debian Buster, it follows the “Build your own system” philosophy: you put your own system together yourself. At the same time, it aims to make that easy for all users, new and experienced alike. Every choice is left entirely to the user, whether it’s package selection or…

December 25, 2019

Run commands automatically at Openbox startup

For any command, script, program, or other item you want to run at the start of an Openbox session, I’ve adapted a program that does the job automatically (its interface is shown in the image). You enter the command and click Ok; that’s all, and the command you entered runs automatically at startup. If you’d like to use this feature, which makes Openbox easier to use, let’s start integrating it into your system. First…

December 24, 2019

NETCAT – transferring data with NC

We’ll look at how to transfer data between two computers with Netcat. We’ll do this by opening a port between the machines. On the receiving computer, run the following command to activate port 3434 and open a tunnel for dosyam.txt: nc -l -p 3434 > dosyam.txt. On the sending machine, run the following command to transfer dosyam.txt: nc 192.168.1.12 3434 < dosyam.txt

December 21-22, 2019 – About the Free Open Source and Linux Administrator Training Event

As a project I took on in the spirit of social responsibility, I delivered a free Linux Administrator training on December 21-22, 2019, with the aim of raising awareness of the importance and necessity of Linux systems alongside the open source world, guiding young professionals and students working in the sector, and building their knowledge and skills. I compiled the training from RedHat and LPI content, covering the RedHat/CentOS and Ubuntu/Debian distributions...

Continue Reading

December 23, 2019

15+ examples for Linux cURL command

In this tutorial, we will cover the cURL command in Linux. Follow along as we guide you through the functions of this powerful utility with examples to help you understand everything it’s capable of. The cURL command is used to download or upload data to a server, using one of its 20+ supported protocols. This data could be a file, email message, or web page. What is cURL command? cURL is an ideal tool for interacting with a website or API, sending requests and displaying the responses to the terminal or logging the data to a file. Sometimes it’s used as part of a larger script, handing off the retrieved data to other functions for processing. Since cURL can be used to retrieve files from servers, it’s often used to download part of a website. It performs this function well, but sometimes the wget command is better suited for that job. We’ll go over some of the differences and similarities between wget and cURL later in this article. We’ll show you how to get started using cURL in the sections below.


Grep command in Linux (With Examples)

In this tutorial, you will learn how to use the very essential grep command in Linux. We’re going to go over why this command is important to master, and how you can utilize it in your everyday tasks at the command line. Let’s dive right in with some explanations and examples. Why do we use grep? Grep is a command line tool that Linux users use to search for strings of text. You can use it to search a file for a certain word or combination of words or you can pipe the output of other Linux commands to grep, so grep can show you only the output that you need to see. Let’s look at some really common examples. Say that you need to check the contents of a directory to see if a certain file exists there. That’s something you would use the “ls” command for. But, to make this whole process of checking the directory’s contents even faster, you can pipe the output of the ls command to the grep command. Let’s look in our home directory for a folder called Documents.

ls without grep

And now let’s try checking the directory again, but this time using grep to check specifically for the Documents folder.

ls | grep Documents

ls grep

As you can see in the screenshot above, using the grep command saved us time by quickly isolating the word we searched for from the rest of the unnecessary output that the ls command produced.

If the Documents folder didn’t exist, grep wouldn’t return any output. So if nothing is returned by grep, that means that it couldn’t find the word you are searching for.

grep no results

Find a string

If you need to search for a string of text, rather than just a single word, you will need to wrap the string in quotes. For example, what if we needed to search for the “My Documents” directory instead of the single-worded “Documents” directory?

ls | grep 'My Documents'

grep for string

Grep will accept both single quotes and double quotes, so wrap your string of text with either.

While grep is often used to search the output piped from other command line tools, you can also use it to search documents directly. Here’s an example where we search a text document for a string.

grep 'Class 1' Students.txt

grep for string in document

Find multiple strings

You can also use grep to find multiple words or strings. You can specify multiple patterns by using the -e switch. Let’s try searching a text document for two different strings:

grep -e 'Class 1' -e Todd Students.txt

grep multiple strings

Notice that we only needed to use quotes around the strings that contained spaces.

Difference between grep, egrep, fgrep, pgrep, zgrep

Various grep switches were historically included in different binaries. On modern Linux systems, you will find these switches available in the base grep command, but it’s common to see distributions support the other commands as well.

From the man page for grep:

grep commands

egrep is the equivalent of grep -E

This switch will interpret a pattern as an extended regular expression. There’s a ton of different things you can do with this, but here’s an example of what it looks like to use a regular expression with grep.

Let’s search a text document for strings that contain two consecutive ‘p’ letters:

egrep p\{2} fruits.txt
or
grep -E p\{2} fruits.txt

egrep example

fgrep is the equivalent of grep -F

This switch will interpret a pattern as a list of fixed strings, and try to match any of them. It’s useful when you need to search for regular expression characters. This means you don’t have to escape special characters like you would with regular grep.

fgrep example

pgrep is a command to search for the name of a running process on your system and return its respective process IDs. For example, you could use it to find the process ID of the SSH daemon:

pgrep sshd

fgrep example

This is similar in function to just piping the output of the ‘ps’ command to grep.

pgrep vs ps

You could use this information to kill a running process or troubleshoot issues with the services running on your system.

zgrep is used to search compressed files for a pattern. It allows you to search the files inside of a compressed archive without having to first decompress that archive, basically saving you an extra step or two.

zgrep apple fruits.txt.gz

zgrep example

zgrep also works on tar files, but only seems to go as far as telling you whether or not it was able to find a match.

zgrep tar file

We mention this because files compressed with gzip are very commonly tar archives.

Difference between find and grep

For those just starting out on the Linux command line, it’s important to remember that find and grep are two commands with two very different functions, even though they are both used to “find” something that the user specifies.

It’s handy to use grep to find a file when you use it to search through the output of the ls command, like we showed in the first examples of the tutorial.

However, if you need to search recursively for the name of a file – or part of the file name if you use a wildcard (asterisk) – you’re much better off using the find command.

find /path/to/search -name name-of-file

find command

The output above shows that the find command was able to successfully locate the file we searched for.

Search recursively

You can use the -r switch with grep to search recursively through all files in a directory and its subdirectories for a specified pattern.

grep -r pattern /directory/to/search

If you don’t specify a directory, grep will just search your present working directory. In the screenshot below, grep found two files matching our pattern, and returns with their file names and which directory they reside in.

recursive grep
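On GNU grep, you can also narrow a recursive search to certain file names with the --include option. For example – the pattern and path here are just placeholders – this searches only .conf files and prints line numbers:

grep -rn --include='*.conf' listen /etc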

Catch space or tab

As we mentioned earlier in our explanation of how to search for a string, you can wrap text inside quotes if it contains spaces. The same method will work for tabs, but we’ll explain how to put a tab in your grep command in a moment.

Put a space or multiple spaces inside quotes to have grep search for that character.

grep " " sample.txt

grep spaces

There are a few different ways you can search for a tab with grep, but most of the methods are experimental or can be inconsistent across different distributions.

The easiest way is to just search for the tab character itself, which you can produce by hitting ctrl+v on your keyboard, followed by tab.

Normally, pressing tab in a terminal window tells the terminal that you want to auto-complete a command, but pressing the ctrl+v combination beforehand will cause the tab character to be written out as you’d normally expect it to in a text editor.

grep " " sample.txt

grep tabs

Knowing this little trick is especially useful when grepping through configuration files in Linux, since tabs are frequently used to separate commands from their values.

Using regular expressions

Grep’s functionality is further extended by using regular expressions, allowing you more flexibility in your searches. Several exist, and we will go over some of the most commons ones in the examples below:

[ ] brackets are used to match any of a set of characters.

grep "Class [123]" Students.txt

grep brackets

This command will return any lines that say ‘Class 1’, ‘Class 2’, or ‘Class 3’.

[-] brackets with hyphen can be used to specify a range of characters, either numerical or alphabetical.

grep "Class [1-3]" Students.txt

grep brackets hyphen

We get the same output as before, but the command is much easier to type, especially if we had a bigger range of numbers or letters.

^ caret is used to search for a pattern that only occurs at the beginning of a line.

grep "^Class" Students.txt

grep caret

[^] brackets with caret are used to exclude characters from a search pattern.

grep "Class [^1-2]" Students.txt

grep brackets caret

$ dollar sign is used to search for a pattern that only occurs at the end of a line.

grep "1$" Students.txt

grep dollar

. dot is used to match any one character, so it’s a wildcard but only for a single character.

grep "A....a" Students.txt

grep dot

Grep gz files without unzipping

As we showed earlier, the zgrep command can be used to search through compressed files without having to unzip them first.

zgrep word-to-search /path/to/file.gz

You can also use the zcat command to display the contents of a gz file, and then pipe that output to grep to isolate the lines containing your search string.

zcat file.gz | grep word-to-search

zcat

Grep email addresses from a file

We can use a fancy regular expression to extract all the email addresses from a file.

grep -o '[[:alnum:]+\.\_\-]*@[[:alnum:]+\.\_\-]*' emails.txt

The -o flag will extract the email address only, rather than showing the entire line that contains the email address. This results in a cleaner output.

grep emails

As with most things in Linux, there is more than one way to do this. You could also use egrep and a different set of expressions. But the example above works just fine and is a pretty simple way to extract the email addresses and ignore everything else.

Grep IP addresses

Greping for IP addresses can get a little complex because we can’t just tell grep to look for 4 numbers separated by dots – well, we could, but that command has the potential to return invalid IP addresses as well.

The following command will find and isolate only valid IPv4 addresses:

grep -E -o "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)" /var/log/auth.log

We used this on our Ubuntu server just to see where the latest SSH attempts have been made from.

grep IP addresses

To avoid repeat information and having your screen flooded, you may want to pipe your grep commands to “uniq” and “more” as we did in the screenshot above.

Grep or condition

There are a few different ways you can use an or condition with grep, but we will show you the one that requires the least amount of keystrokes and is easiest to remember:

grep -E 'string1|string2' filename
or, technically, using egrep is even fewer keystrokes:
egrep 'string1|string2' filename

grep or condition

Ignore case sensitivity

By default, grep is case sensitive, which means you have to be precise in the capitalization of your search string. You can avoid this by telling grep to ignore the case with the -i switch.

grep -i string filename

grep ignore case

Search with case sensitive

What if we want to search for a string where the first letter can be uppercase or lowercase, but the rest of the string should be lowercase? Ignoring case with the -i switch won’t work here, so a simple way to do it would be with brackets.

grep [Ss]tring filename

This command tells grep to be case sensitive except for the first letter.

grep case sensitive

Grep exact match

In our examples above, whenever we search our document for the string “apple”, grep also returns “pineapple” as part of the output. To avoid this, and search for strictly “apple”, you can use this command:

grep "\<apple\>" fruits.txt

exact match

You can also use the -w switch, which tells grep that the pattern must match a whole word rather than part of one. If you instead need the pattern to match an entire line by itself, use the -x switch.
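For example, against the fruits.txt file used earlier, the first command below matches apple only as a whole word (still skipping pineapple), while the second matches only lines that consist of nothing but apple:

grep -w apple fruits.txt

grep -x apple fruits.txt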

Exclude pattern

To see the contents of a file but exclude patterns from the output, you can use the -v switch.

grep -v string-to-exclude filename

exclude pattern

As you can see in the screenshot, the string we excluded is no longer shown when we run the same command with the -v switch.

Grep and replace

A grep command piped to sed can be used to replace all instances of a string in a file. This command will replace “string1” with “string2” in all files relative to the present working directory:

grep -rl 'string1' ./ | xargs sed -i 's/string1/string2/g'

Grep with line number

To show the number of a line that your search string is found on, use the -n switch.

grep -n string filename

show line numbers

Show lines before and after

If you need a little more context to the grep output, you can show one line before and after your specified search string with the -C switch:

grep -C 1 string filename

Specify the number of lines you wish to show – we did only 1 line in this example.

line before and after

Sort the result

Pipe grep’s output to the sort command to sort your results in some kind of order. The default is alphabetical.

grep string filename | sort


I hope you find the tutorial useful. Keep coming back.

Understanding Linux runlevels the right way

You can think of Linux runlevels as different “modes” that the operating system runs in. Each of these modes, or runlevels, has its own list of processes and services that are either turned on or off. From the time Linux boots up, it’s always in some runlevel. This runlevel may change as you continue to use your computer, depending on what kind of services the operating system needs access to. For example, running your Linux machine with a graphical user interface will necessitate a different runlevel than if you were only running the command line on your system. This is because the graphical user interface will need access to various services that the command line simply does not. In order for the system to determine which services are needed to be switched on (or off), it changes the runlevel as needed.

Importance of Linux runlevels

You may have used Linux for years without realizing that there are different runlevels. That’s because it’s not something most server administrators will need to configure often.

However, Linux runlevels do give administrators increased control over the systems they manage.

The runlevel a system is in can be changed (which we will see how to do later in the article), as can the services which run inside the runlevels. This allows us complete control over what services our systems have access to at any given time.

How many runlevels are in Linux?

There are seven different runlevels in Linux, numbered from zero to six. Various distributions may use the seven runlevels differently, so it’s not very easy to compile a definitive list of what the runlevels do.

Instead, you would need to check how the runlevels work on the specific distribution that you are using. For the most part, the list below represents how Linux distributions generally configure runlevels:

Runlevel 0 shuts down the system.

Runlevel 1 is a single-user mode, which is used for maintenance or administrative tasks. You may also see this mode referred to as runlevel S (the S stands for single-user).

Runlevel 2 is a multi-user mode. This runlevel does not use any networking services.

Runlevel 3 is a multi-user mode with networking. This is the normal runlevel you are used to if you use a system that doesn’t boot into a GUI (graphical user interface).

Runlevel 4 is not used. The user can customize this runlevel for their own purposes (which we will cover how to do later in the article).

Runlevel 5 is the same as runlevel 3, but it also starts a display manager. This is the runlevel you are used to if you use a system that boots into a GUI.

Runlevel 6 reboots the system.

What is my current runlevel?

You can see your current runlevel on most distributions by simply typing “runlevel” in the terminal.

Current runlevel

When you enter the “runlevel” command, it’ll give you two different numbers. The first number is the previous runlevel your system was running, and the second number is the current runlevel of your system.

In the screenshot above, the “N” is short for “none”, meaning that the system was not in any different runlevel previously. The “5” means our system is currently in runlevel 5.

We are running CentOS in this example; it booted directly into a graphical interface, which is why the system went straight to runlevel 5.

How to change the current runlevel?

You can change the current runlevel of your system using the “telinit” command. For example, to change to runlevel 3 on CentOS, you would type:

telinit 3

Change the current runlevel

Keep in mind that you must be the root user to execute this command. Be aware that runlevels work differently on Debian and Ubuntu – for example, Ubuntu will boot into runlevel 5 even without starting a GUI.

If you follow the example above, your screen may go blank. This is because you’re left at the – now empty – tty. Just do Alt+F1 (or some other function key) on your keyboard to be taken to a working terminal.

If we use the “runlevel” command again, we’ll see that we are now in runlevel 3, and the previous runlevel is listed as 5 since we just changed from it.

runlevel command

Linux systemd targets vs runlevels

In recent years, systemd has come to replace the long-standing “System V init” (runlevels) system. It still works in basically the same way, but uses some new commands and commonly refers to “runlevels” as “targets” instead.

Runlevel 0 = poweroff.target (runlevel0.target)

Runlevel 1 = rescue.target (runlevel1.target)

Runlevel 2 = multi-user.target (runlevel2.target)

Runlevel 3 = multi-user.target (runlevel3.target)

Runlevel 4 = multi-user.target (runlevel4.target)

Runlevel 5 = graphical.target (runlevel5.target)

Runlevel 6 = reboot.target (runlevel6.target)

We’ll continue going over systemd and the commands you’ll need to know as this tutorial progresses.
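
As a quick preview: on systemd distributions, the equivalent of telinit for switching between targets is the isolate subcommand. For example, the systemd equivalent of “telinit 3” is:

systemctl isolate multi-user.target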

How to change the default runlevel at startup?

There are a lot of reasons why you may wish to boot into a different runlevel. For example, it’s common for system administrators to boot into the command line, and only start a graphical interface when deemed necessary.

For this functionality, you would want to make sure your default runlevel is set to 3, and not 5.

In the past, one was required to edit the /etc/inittab file to define the default runlevel at startup. You may still find this to be the case on some distributions.

If working with an operating system that has not been upgraded for a few years, you’ll still find this method to be relevant for you.

vi /etc/inittab

inittab file

In the screenshot above, runlevel 5 is currently set to the default runlevel for the startup.

As of 2016, most major Linux distributions have phased out the /etc/inittab file, in favor of systemd targets – we’ll cover the differences later in this article.

You may find that your system doesn’t have the /etc/inittab file at all, or your inittab file may advise you to use systemd instead, like in this screenshot from our CentOS system.

systemd

To check the current default target of your system:

systemctl get-default

Current default target

In the screenshot above, the reply back from the system is “graphical.target”. As you can probably guess, this is the equivalent to runlevel 5.

To see the other available targets, and the runlevels they are associated with, type:

ls -l /lib/systemd/system/runlevel*

Available target runlevels

These symbolic links tell us that the systemd targets pretty much operate the same way runlevels do. So, how can we change the default runlevel (or target) at startup? We need to create a new symbolic link, like this:

ln -sf /lib/systemd/system/runlevel3.target /etc/systemd/system/default.target

Default target

This command will change our default runlevel to 3, so the next time we reboot, our system will be in runlevel 3 instead of 5. If you wanted a different runlevel, you would just substitute a different number in place of the “3” in the command.

For reference, the -f switch in that command indicates that the target file should be removed, before creating the new link. You could also just remove it first with a simple rm command.
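
On systemd systems there is also a dedicated subcommand that creates this symlink for you, which may be less error-prone than linking by hand:

systemctl set-default multi-user.target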

You can confirm that the change was made successfully with the “systemctl get-default” command again.

get-default command

Runlevel 3 vs runlevel 5

The two runlevels you will hear about and work with the most are going to be 3 and 5. It basically boils down to this: runlevel 3 is a command line, and runlevel 5 is a graphical user interface.

Of course, not every distribution follows this convention, and your system could be configured by an admin so that these runlevels have even more differences.

But, in general, that’s how it works. If you want to see exactly what services are enabled at both of these runlevels, we cover that in the next section.

List services that are enabled at a particular runlevel

Until recently, “chkconfig --list” was the command to list the services that would be enabled at different runlevels. If your operating system is up to date, that command may give you an error or refer you to systemd.

chkconfig command

If we want to see what services will be started when we boot into graphical mode (runlevel 5), we can run this command:

systemctl list-dependencies graphical.target

List services

To see the services that run by default on other runlevels, just replace “graphical.target” with the name of the target you need to see.
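
For example, to list the services behind the equivalent of runlevel 3:

systemctl list-dependencies multi-user.target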

Under which runlevel will a process run?

If you’d like to see which runlevel(s) a specific service runs at, you can use this command:

systemctl show -p WantedBy [name of service]

For example, if you wanted to see at which runlevel the SSH daemon will run, you would type:

systemctl show -p WantedBy sshd.service

Runlevel for a service

According to the output in the above screenshot, the SSH service will start on runlevels 2, 3, and 4 (multi-user.target).

How to change the runlevel of an application?

As seen above, our SSH service is only running at runlevels 2-4 (multi-user.target). What if we also want it to start when we boot into a graphical interface – runlevel 5 (graphical.target)? We could apply that configuration with the following command:

systemctl enable sshd.service

Enable service at startup
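
You can then check whether the unit is enabled to start at boot:

systemctl is-enabled sshd.service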

Security issues with runlevels in Linux

As we said earlier in this article, the point of Linux runlevels is to give an administrator control over what services run under certain conditions. Having this type of granular control over your system enhances security since you can be sure that there are no extraneous services running.

The problem arises when an administrator is unaware of exactly what services are running and therefore doesn’t bother to secure those attack surfaces.

You can use the methods in this guide to configure your default runlevel and to control which applications are running. These practices will not only free up system resources, but also keep your server more secure.

Remember to only use the runlevel you need. For example, there is no sense in starting runlevel 5 (graphical interface) if you only plan to use the terminal.

Changing to a different runlevel will introduce multiple new services, some of which may run completely in the background, where you may forget to secure them.

Which runlevel is the best for me?

Determining which runlevel is the best for you all depends on the situation. Generally, you are probably going to be using runlevels 3 and 5 on a regular basis.

If you are comfortable with the command line and don’t need a graphical interface, runlevel 3 (on most distributions) is going to be best for you.

This will keep unnecessary services from running. On the other hand, if you want more of a desktop experience and a graphical interface to use various apps, etc, then runlevel 5 will be your preferred runlevel.

If you need to perform maintenance on a production server, runlevel 1 suits that situation well. This is used to ensure that you are the only one on the server (the network service is not even started), and you can perform your maintenance uninterrupted.

In rare cases, you may even need to use runlevel 4. This would only be in particular situations where you or the system administrator has a custom configured runlevel. We will cover how to do that in the next section.

As you have probably assumed, you won’t (and can’t) run your system in runlevels 0 or 6, but it’s possible to switch to them just to reboot or power off. Doing so shouldn’t ordinarily be necessary since there are other commands that do that for us.
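
For reference, on a systemd system those commands are simply:

systemctl reboot

or

systemctl poweroff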

Can we create a new runlevel in Linux?

It is possible to create a new runlevel in Linux, but it’s extremely unlikely that you would ever need to do that. If you were determined to do it anyway, you can start by copying one of the existing systemd targets, and then editing it with your own customizations.

The targets are located in:

/usr/lib/systemd/system

If you wanted to base your new runlevel/target off of graphical.target (runlevel 5), copy that target file to a new name:

cp /usr/lib/systemd/system/graphical.target /usr/lib/systemd/system/mynew.target

After that, create a new “wants” directory, like so:

mkdir /etc/systemd/system/mynew.target.wants

And then symlink the additional services from /usr/lib/systemd/system that you want to enable for your new runlevel.
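
As a sketch, assuming you wanted the hypothetical httpd.service to start under your new target, such a symlink would look like this:

ln -s /usr/lib/systemd/system/httpd.service /etc/systemd/system/mynew.target.wants/httpd.service

You could then make the new target the default with the same set-default subcommand shown earlier (systemctl set-default mynew.target).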

I hope you find the tutorial useful. Keep coming back.

10+ examples for killing a process in Linux

In this tutorial, we will talk about killing a process in Linux with multiple examples. In most cases, it’s as simple as typing the “kill” command followed by the process ID (commonly abbreviated as PID).

For example, here we’ve killed a process with the ID of 1813. If you are a Windows user, it may help to think of the ‘kill’ command as Linux’s equivalent of the ‘End task’ button inside of the Windows task manager.

To find a PID, start with the “ps -e” command, which will list everything running on your system. Even with a minimal installation, the command will probably output more than 80 results, so it’s much easier to pipe the command to ‘grep’ or ‘more’.

ps -e | grep name-of-process

In the screenshot below, we check to see if SSH is running on the system.

Check if process is running

This also gives us the PID of the SSH daemon, which is 1963.

Pipe to ‘more’ if you want to look through your system’s running processes one-by-one.
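
For example:

ps -e | more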

List running processes

You can also make use of the ‘top’ command in order to see a list of running processes. This is useful because it will show you how many system resources each process is using.

List processes using top command

The PID, User, and name of the resource are all identified here, which is useful if you decide to kill any of these services later.

Pay attention to the %CPU and %MEM columns, because if you notice an unimportant process chewing up valuable system resources, it’s probably beneficial to kill it!

Another very efficient way of obtaining the corresponding process ID is to use the ‘pgrep’ command. The only argument you need to supply is the name (or part of the name) of the running process.
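
For example, to search for the SSH daemon (pgrep matches on partial names, so ‘ssh’ will also match sshd):

pgrep ssh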

Here’s what it looks like when we search for SSH. As you can see, it returns a process ID of 2506.

Using pgrep

Kill a process by PID

Now that we know the PID of the SSH daemon, we can kill the process with the kill command.

sudo kill 1963

Kill process by ID

You can issue a final ‘ps’ command, just to ensure that the process was indeed killed.

ps -e | grep ssh

The results come up empty, meaning that the process was shut down successfully. If you notice that the process is continuing to run – which should not normally happen – you can try sending a different kill signal to the process, as covered in the next section.

Note: It’s not always necessary to use ‘sudo’ or the root user account to end a process. In the example above, we were terminating the SSH daemon, which runs under the root user, so we needed the appropriate permissions to end it.

Default signal sent by the kill command

By default, the kill command will send a SIGTERM signal to the process you specify.

This should allow the process to terminate gracefully, as SIGTERM will tell the process to perform its normal shutdown procedures – in other words, it doesn’t force the process to end abruptly.
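
If you prefer to be explicit, sending SIGTERM by name is equivalent to the default behavior:

kill -SIGTERM processID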

This is a good thing because we want our processes to shut down the way they are intended.

Sometimes, though, the SIGTERM signal isn’t enough to kill a process. If you run the kill command and notice that the process is still running, the process may still be going through its shutdown process, or it may have become hung up entirely.

Force killing

To force the process to close and forego its normal shutdown, you can send a SIGKILL signal with the -9 switch, as shown here:

kill -9 processID

Force process killing

It can be tempting to always append the -9 switch on your kill commands since it always works. However, this isn’t the recommended best practice. You should only use it on processes that are hung up and refusing to shut down properly.

When possible, use the default SIGTERM signal. This will prevent errors in the long run, since it gives the process a chance to close its log files, terminate any lingering connections, etc.

Apart from the SIGTERM and SIGKILL signals, there is a slew of other signals that kill can send to processes, all of which can be seen with the -l switch.
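
That is:

kill -l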

Kill signals

The numbers next to the names are what you would specify in your ‘kill’ command. For example, kill -9 is SIGKILL, just like you see in the screenshot above.

For everyday operations, SIGTERM and SIGKILL are probably the only signals you will ever need to use. Just keep the others in mind in case you have a weird circumstance where a process recommends terminating it with a different signal.

How to kill all processes by name?

You can also use the name of a running process, rather than the PID, with the pkill command. But beware: this will terminate all the processes running under the specified name, since pkill won’t know which specific process you are trying to terminate.

pkill name-of-process

Check out the example below where we terminate five processes with a single pkill command.

Kill a process using pkill

If, in this example, we had wanted to terminate only one of those screen sessions, it would’ve been necessary to specify its PID and use the normal ‘kill’ command. Otherwise, there is no way to uniquely specify the process that we wish to end.

How to kill all processes by a user?

You can also use the pkill command to terminate all processes that are running under a Linux user. First, to see what processes are running under a specific user, use the ps command with the -u switch.

ps -u username

List processes running by a user

That screenshot shows us that there are currently 5 services running under the user ‘geek’. If you need to terminate all of them quickly, you can do so with pkill.

pkill -u username

pkill command

How to kill a nohup process?

The nohup process is killed in the same way as any other running process. Note that you can’t grep for “nohup” in the ps command, so you’ll need to search for the running process using the same methods as shown above.

In this example, we find a script titled ‘test.sh’ which has been executed with the nohup command. As you’ll see, finding and ending it is much the same as the examples above.

Kill nohup process

The only difference in the output is that we are notified that the process was terminated. That’s not part of kill, but rather a result of running the script in the background (the ampersand in this example) while being connected to the same tty from which the script was started.

nohup ./test.sh &

How to run a process in the background?

The kill command is an efficient way to terminate processes you have running in the background. You’ve already learned how to kill processes in this tutorial, and knowing how to run a process in the background makes an effective combination with the kill command.

You can append an ampersand (&) to your command in order to have it executed in the background. This is useful for commands or scripts that will take a while to execute, and you wish to do other tasks in the meantime.

Run a process in background using ampersand

Here we have put a simple ‘ls’ command into the background. Since ‘ls’ takes very little time to execute, the shell reports that it finished its job almost immediately.

The output in our screenshot says “Done”, meaning that the job in the background has completed successfully. If you were to kill the job instead, it would show “terminated” in the output.

You can also send a foreground job to the background by pressing Ctrl+Z on your keyboard. The ^Z in this screenshot indicates that Ctrl+Z was pressed; strictly speaking, this suspends the test.sh script rather than keeping it running.
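
To let the suspended job continue executing in the background, resume it with the bg command:

bg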

Run a process in background using CTRL+Z

You can see test.sh continuing to run in the background by issuing a ps command.

ps -e | grep test.sh

List background processes

Using screen command

Another way to run a process in the background is to use the ‘screen’ command. This works by creating what basically amounts to a separate terminal window (or screen… hence the name).

Each screen that you create will be given its own process ID, which means that it’s an efficient way of creating background processes that can be later ended using the kill command.

Screen isn’t included on all Linux installs by default, so you may have to install it, especially if you’re not running a distribution meant specifically for servers.

On Ubuntu and Debian-based distributions, it can be installed with the following command:

sudo apt-get install screen

Once the screen command is installed, you can create a new session by just typing ‘screen’.

screen

But, before you do that, it’s good to get in the habit of specifying names for your screens. That way, they are easy to look up and identify later. All you need in order to specify a name is the -S switch.

screen -S my-screen-name

Let’s make a screen called “testing” and then try to terminate it with the ‘kill’ command. We start like this:
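
Following the -S syntax above, that command is:

screen -S testing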

Screen command

After typing this command and pressing enter, we’re instantly taken to our newly created screen. This is where you could start the process that you wish to have running in the background.

This is especially handy if you are SSH’d into a server and need a process to continue running even after you disconnect.

With your command/script running, you can disconnect from the screen by pressing Ctrl+A, followed by D (release the Ctrl and A key before pressing the D key).

Disconnect from screen

As you can see, screen listed the session’s process ID as soon as we detached. Of course, we can terminate this screen (and the command/script running inside of it) by using the kill command.

You can easily look up the process ID of your screen sessions by using this command:

screen -ls

List screen sessions

If we hadn’t named our screen session by using the -S switch, only the process ID itself would be listed. To reattach to any of the screens listed, you can use the -r switch:

screen -r name-of-screen

or

screen -r PID

Reattach screen process

In the screenshot below, we are killing the screen session we created (along with whatever is being run inside of it), and then issuing another screen -ls command in order to verify that the process has indeed ended.

Kill screen session

How to kill a background process?

In one of our examples in the previous section, we put our test.sh script to run in the background. Most background processes, especially simple commands, will terminate without any hassle.

However, just like any process, one in the background may refuse to shut down easily. Our test.sh script has a PID of 2382, so we’ll issue the following command:

kill 2382

In the screenshot, though, you’ll notice that the script has ignored our kill command:

Process killing ignored

As we’ve learned already, kill -9 is the best way to kill a process that is hung up or refusing to terminate.

Force killing

How to kill stopped processes?

It can be useful to kill all your stopped background jobs at once if they have accumulated and are no longer useful to you. For the sake of example, we’re running three instances of our test.sh script in the background and they’ve been stopped:

./test.sh &

Stop a process

You can see these processes with the ps command:

ps -e | grep test.sh

List processes

Or, to get a list of all the jobs started from the current shell, you can run the jobs command:

jobs

List stopped processes

The easiest way to kill all the stopped jobs is with the following command:

kill `jobs -ps`

Or use the -9 switch to make sure the jobs terminate immediately:

kill -9 `jobs -ps`

Kill stopped processes

The jobs -ps command lists the PIDs (-p) of the stopped (-s) jobs, which is why we’re able to combine its output with the kill command in order to end all the stopped processes.

Kill operation not permitted

If you are getting an “operation not permitted” error when trying to kill a process, it’s because you don’t have the proper permissions. Either log in to the root account or prefix your kill command with ‘sudo’.

sudo kill PID

sudo kill permission

I hope you find the tutorial useful. Keep coming back.

15+ examples for listing users in Linux

In this post, you will learn about listing users in Linux, along with some other tricks for working with Linux user accounts. There are two types of users in Linux: system users, which are created by default with the system, and regular users, which are created by system administrators and can log in to the system and use it. Before we start listing users, we need to know where these users are stored. They live in a plain-text file called passwd, located at the following path:

/etc/passwd

In this file, you can find all the information about the users in the system.

List all users

Listing users is the first step in managing them; this way, we know how many there are and who they are. In Linux, almost everything can be done in various ways, and this is no exception.

To list all users, you can use the cat command:

cat /etc/passwd

list all users in Linux

As you can see in the image, each line holds all the information about one user.

1- In the first field, you will see the user name.

2- Then, a representation of the encrypted password (The x character). The encrypted password is stored in /etc/shadow file.

3- The UID or the user ID.

4- The next field refers to the primary group of the user.

5- Then, it shows user ID info such as the address, email, etc.

6- After this, you will see the home directory of the user.

7- The last field is the shell used by that user.
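
Putting those seven fields together, a typical line looks like this (a made-up example):

likegeeks:x:1000:1000:LikeGeeks User:/home/likegeeks:/bin/bash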

This information is quite useful, but if you only want to list the users’ names in a basic way, you can use this command:

cut -d: -f1 /etc/passwd

Listing users in Linux

Now we have only the names, printed from the first field of the file.

List & sort users by name

The above command serves the purpose of listing users on Linux. But what about listing the users in alphabetical order?

To do this, we will use the previous command, but we will add the sort command.

So, the command will be like this:

cut -d: -f1 /etc/passwd | sort

Sort by name

As you can see in the image, the users are shown sorted.

Linux list users without password

It is important to know which users have no password so you can take appropriate action. To list users who do not have a password, just use the following command:

sudo getent shadow | grep -Po '^[^:]*(?=:.?:)'

User with no password

This regex matches every account whose password field in /etc/shadow is empty or only one character long; an empty field means the user has no password, while a single ! or * marks a locked account.

List users by disk usage

If you have a big directory and you want to know which user is flooding it, you can use the du command to get the disk usage.

With this, you can detect which of these users are misusing the disk space.

For this, the following command is enough:

sudo du -smc /home/* | sort -n

List users by disk usage

In this way, you will have the users ordered by the disk usage for the /home directory.

We used the -n option with the sort command to sort the output numerically.

List the currently logged-in users

There are several ways to list the currently logged-in users. First, we can use the users command:

users

Users currently logged

It will list the users with open sessions in the system.

This information is a little basic, however; we have another command that gives more details. The command is simply w.

w

Using the w command to list users currently logged

With this command, we get more information, such as the exact time each session was started and the terminal it is attached to.

Finally, there is a command called who. It is available across the entire Unix family, so you can use it on other systems like FreeBSD.

who

The who command

The who command also gives us some information about the currently logged-in users. Of course, we can add the -a option to show all the details.

who -a

The who command with options

So, this way you know everything about the logged in users.

Linux list of users who recently logged into the system

We saw how to get the currently logged-in users; what about listing the login history of users?

You can use the last command to get more info about the logins that took place:

last

The last command

Or the logins of a particular user

last [username]

For example:

last angelo

last command with specific user

The output shows each login session for that user, when it started, and how long it lasted.

List users’ logins on a specific date or time

What about listing users’ logins on a specific date or time? To achieve this, we use the last command but with the -t parameter:

last -t YYMMDDHHMMSS

For example:

last -t 20190926110509

List users by a specific date

And now all you have to do is choose an exact date & time to list who was logged in at that time.

List all users in a group

There are two ways to list the members of a group in Linux. The easiest and most direct way is to get the users from the /etc/group file like this:

cat /etc/group | grep likegeeks

This command will list users in the likegeeks group.

The other way is by using a tool like the members command on Debian-based distros. However, it is not installed by default.

To install it in Ubuntu / Linux Mint 19, just use APT:

sudo apt install members

Or in the case of CentOS:

sudo dnf install members

Once it’s installed, you can run the command followed by the name of the group whose users you want to list:

members [group_name]

For example:

members avahi

Using the members command

This way you can list users for a group on a Debian-based distro. What about a Red Hat-based distro like CentOS?

You can use the following command:

getent group likegeeks

List users with UID

In Unix systems, each user has a user identifier or ID. It serves to manage and administer accounts internally in the operating system.

Generally, UIDs from 0 to 999 are reserved for system users, and UIDs from 1000 upward go to regular users. On Unix systems, UID zero always belongs to root (and you can have more than one user with a UID of zero).

So now we will list the users with their respective UID using awk.

The command that performs the task is the following:

awk -F: '{printf "%s:%s\n",$1,$3}' /etc/passwd

List users with the UID

As you can see, each user is listed with their UID.

List root users

In a Unix-like system such as Linux, there is usually only one root user. But if there are more, how do we list them?

To do this, we can use this command:

grep 'x:0:' /etc/passwd

root users in the system

Here we are filtering the file to get users with UID of zero (root users).

Another way by checking the /etc/group file:

grep root /etc/group

The root users in Linux

Here we are getting users in the group root from the /etc/group file.

Also, you can check if any user can execute commands as root by checking the /etc/sudoers file:

cat /etc/sudoers

Get the total number of users

To get the total number of users in Linux, you can count lines in /etc/passwd file using the wc command like this:

cut -d: -f1 /etc/passwd | wc -l

List total number of users in Linux

Great! 43 users. But this includes system and regular users. What about getting the number of regular users only?

Easy! Since we know from above that regular users have a UID of 1000 or greater, we can use awk to get them:

awk -F: '$3 >= 1000 {print $1}' /etc/passwd

List regular users

Cool!

List sudo users

Linux systems have a utility called sudo that allows you to execute commands as if you were another user, usually the root user.

This should be handled with care in a professional environment.

Also, it is very important to know which users can run the sudo command. For this, it is enough to list the users that belong to the sudo group (on Red Hat-based distros, the equivalent group is wheel).

members sudo

sudo group users

Users in this group can execute commands as super users.

List users who have SSH access

SSH allows users to access remote computers over a network. It is quite secure and was born as a replacement for Telnet.

On Linux, by default, all regular users can log in and use SSH. If you want to limit this, you can edit the SSH server configuration file (/etc/ssh/sshd_config) and add the following directive:

AllowUsers user1 user2 user3

Also, you can allow groups instead of individual users by using the AllowGroups directive:

AllowGroups group1 group2 group3

These directives define who can access the service. Don’t forget to restart the SSH service.
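
For example, on a systemd distribution (note that the unit is named sshd on some distros and ssh on others):

sudo systemctl restart sshd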

List users who have permissions to a file or directory

We can give more than one user permission to access or modify files & directories in two ways.

The first method is by adding users to the group of the file or the directory.

This way, we can list the group members using the members utility as shown above.
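
As a quick sketch with made-up names, adding a user to a file’s group would look like this:

sudo usermod -aG mygroup newuser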

Okay, but what if we just want this user to have access to this specific file only (Not all the group permissions)?

Here we can set the ACL for this file using setfacl command like this:

setfacl -m u:newuser:rwx myfile

Here we give the user called newuser read, write & execute permissions on the file called myfile.

Now the file can be accessed or modified by the owner and the user called newuser. So how do we list them?

We can list them using the getfacl command like this:

getfacl myfile

This command will list all users who have permissions for the file with their corresponding permissions.

List locked (disabled) users

In Linux, we can lock users as a security measure. This is a useful precaution when a user is suspected of doing something wrong and you don’t want to remove the account completely, just lock it while you investigate.

To lock a user, you can use the following command:

usermod -L myuser

Now the user named myuser will no longer be able to log in or use the system.
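
When the investigation is over, you can unlock the account again with the -U option:

usermod -U myuser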

To list all locked users of the system, just use the following command:

cat /etc/passwd | cut -d : -f 1 | awk '{ system("passwd -S " $0) }' | grep locked

This will print all locked users including system users. What about listing regular users only?

As we saw above, using awk we can get locked regular users like this:

awk -F: '$3 >= 1000 {print $1}' /etc/passwd | awk '{ system("passwd -S " $0) }' | grep locked

Very easy!

Listing remote users (LDAP)

Okay, now we can list all local system users, but what about remote users, such as LDAP users? Well, we could use a tool like ldapsearch, but is there any other way?

Luckily, yes! You can list both local & remote users with one command called getent:

getent passwd

This command lists both local system users and LDAP or NIS users or any other network users.

You can pipe the results of this command to any of the above-mentioned commands the same way.
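
For example, to get a sorted list of all local and remote user names:

getent passwd | cut -d: -f1 | sort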

Also, the getent command can list group accounts like this:

getent group

You can check the man page of the command to know the other databases the command can search in.

Conclusion

Listing users in the Linux system was fun! Besides this, we have learned some tips about users and how to manage them in different ways.

Finally, this knowledge will allow a better administration of the users of the system.

I hope you find the tutorial useful. Keep coming back.

December 22, 2019

Pardus mailing list archives

Our dear friend @caylakpenguen has touched on another interesting topic. This time, he brings up the “Pardus mailing list archives”, recalling the mailing lists and user community that were quite active up until 2012. As is known, the Pardus core team disbanded after that date, and most of its members moved to America or to European countries. @caylakpenguen says he had backed up the Pardus mailing list archives and then forgotten them on a backup disk; while going through that disk, he came across the mail archives, contacted the LKD management, and arranged for the archive to be added to the LKD list archive. He also takes the opportunity to send his greetings to Doruk Fişek.

According to our dear friend @caylakpenguen, you can access the Pardus mailing list archives here.

December 19, 2019

How to upgrade to Linux Mint 19.3 “Tricia”?

As is known, Linux Mint 19.3 “Tricia” was announced by Clement Lefebvre on Wednesday, December 18, 2019. Based on Ubuntu 18.04 LTS, the new release ships with the Cinnamon 4.4, MATE 1.22.2, and Xfce 4.14 desktop environments. Built on the 5.0 Linux kernel, the system comes with proactive system reports that notify the user of potential problems. Announcing the release, which will be supported until 2023, Lefebvre said that the development team will not start working on a new base until 2020. He then published a post titled “How to upgrade to Linux Mint 19.3 ‘Tricia’?”. Accordingly, in this article we will cover how to upgrade to Linux Mint 19.3 “Tricia”.

Users who want to upgrade to Linux Mint 19.3 “Tricia” should first open the Update Manager and click the Edit menu.

As seen in the image, an “Upgrade to Linux Mint 19.3 Tricia” option now appears at the bottom of the Edit menu. Click this option.

When you click the “Upgrade to Linux Mint 19.3 Tricia” option, the upgrade window shown above opens. You start in the introduction section; click the “Next” button to advance through the steps. Finally, you will confirm the upgrade and the upgrade process will begin.

As seen in the image above, you can follow the upgrade process this way. After a short while, you will be notified that the upgrade has finished and asked to restart your computer. Restart your computer and you can start using Linux Mint 19.3 “Tricia”.

December 18, 2019

What’s new in Linux Mint 19.3 “Tricia”?

The Linux Mint team announced on its blog that the stable release of Linux Mint 19.3 “Tricia” is out. What’s new in Linux Mint 19.3? After installing Linux Mint 19.3, you will see a new icon in the bottom-right corner. It will notify you about potential issues with your system (a missing language pack, a multimedia codec, new hardware, or a new Linux Mint release). From the System Reports window, this

How to update to Linux Mint 19.3 "Tricia"?

Follow the steps below to update from Linux Mint 19, 19.1, or 19.2 to 19.3. How to update from Linux Mint 19, 19.1, or 19.2 to 19.3? 1- In case you run into any problems on your system, take a snapshot of your system with Timeshift before upgrading. 2- Prepare your system for the upgrade. The screen

Linux Mint 19.3 "Tricia" XFCE released

The Linux Mint team has released Linux Mint 19.3 "Tricia" XFCE. An LTS release, Linux Mint 19.3 will be supported until 2023. Bundling updated software and new features, this release makes your day-to-day use more comfortable. There are many improvements in this new release; you can find out what they are at this address. Linux Mint 19.3 "Tricia" XFCE

Linux Mint 19.3 "Tricia" MATE released

The Linux Mint team has released Linux Mint 19.3 "Tricia" MATE. An LTS release, Linux Mint 19.3 will be supported until 2023. Bundling updated software and new features, this release makes your day-to-day use more comfortable. There are many improvements in this new release; you can find out what they are at this address. Linux Mint 19.3 "Tricia" MATE

Linux Mint 19.3 “Tricia” Cinnamon released

The Linux Mint team has released Linux Mint 19.3 "Tricia" Cinnamon. An LTS release, Linux Mint 19.3 will be supported until 2023. Bundling updated software and new features, this release makes your day-to-day use more comfortable. There are many improvements in this new release; you can find out what they are at this address. Linux Mint 19.3 "Tricia"

How to install Slimjet on openSUSE?

There is no RPM package for the Slimjet browser in the openSUSE repositories or anywhere else. However, the Slimjet developers have found an easier solution: they placed all of the browser’s components inside a tar file, ready to run. This way it can be installed not only on openSUSE but on almost every distribution. The only reason I single out openSUSE is that I have tried it only on openSUSE.

Feeds