February 16, 2020

15+ examples for yum update command

Yum is a package manager used on Red Hat, CentOS, and other Linux distributions that use the RPM Package Manager. Yum is used to install, update, delete, or otherwise manipulate the packages installed on these Linux systems. In this tutorial, we will cover the yum update command: what it is, how to use it, and all the different commands you may need to know when you wish to upgrade the installed packages on your system.

How does yum update work?

Yum update is the command used to update applications installed on a system. If the command is run without any package names specified, it will update every currently installed package on the system.

yum update

When running this command, yum will begin by checking its repositories for updated versions of the software your system currently has installed. The screenshot below shows the type of output you’ll typically see when first issuing the yum update command.

yum update command

As you can see, the output from yum first lists the repositories it’s querying, which are the default ones for CentOS: AppStream, Base, and Extras. Below that, yum lists the various packages which it has found updates for.

At the tail end of this output, yum will display the “Transaction Summary,” which shows the total number of packages that are to be installed and upgraded.

yum update summary

In this example, 166 packages are being upgraded, and 6 new packages are being installed.

In case you’re wondering why new packages are being installed when we are only supposed to be upgrading applications, some new software packages may have become part of this Linux distribution, or some upgraded applications may rely on extra packages that are not yet installed.

Once you review the list of software that yum plans to upgrade, you can confirm these changes by typing “y” and hitting enter.
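If you are running updates from a script and do not want to answer this prompt interactively, yum accepts the -y (assume yes) option:

```shell
# Answer "yes" to all prompts automatically; useful in scripts and cron jobs
yum update -y
```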

Yum will then perform the upgrades, which may take some time depending on the speed of your connection and the system itself.

Once it has finished, you’ll get a final summary which will list all the packages that were successfully upgraded, as well as any errors that may have been encountered.

yum update complete

Update without gpg checking

GPG keys are used to verify the authenticity of an RPM package. The --nogpgcheck option in yum will instruct it to skip checking GPG signatures on packages. This is useful in cases where you have an unsigned package or you just don’t have the GPG key.

yum update --nogpgcheck

This is a quick solution if you encounter an error like “Package NameOfPackage.rpm is not signed .. install failed!” when running the normal yum update command. The --nogpgcheck option will ignore this warning and proceed with upgrading the package anyway.

Update from a local repo

It’s possible to set up local repositories for yum to query when it does an update. This is often done if you want to use yum to update packages that aren’t included in the default repos, or if you need to upgrade an offline system.

First, place all your updated RPM files in a new folder. In this example, we’ll use /root/rpms.

Next, navigate to the following directory where you can see all the repo files for yum:

cd /etc/yum.repos.d

Local repo files

To set up a local repo, create a new file in this directory.

vi MyRepo.repo

Inside your repo file, configure it in this format, changing the lines as necessary:

[MyRepo]
name=My Local Repo
baseurl=file:///root/rpms
enabled=1
gpgcheck=0

The big difference between a local repo and a remote repo is in the “baseurl” line, where the file:// protocol specifies a local file, as opposed to the remote protocols http:// or ftp://.

Once the file has been saved, apply the correct permissions:

chmod 644 MyRepo.repo

The repository should now be ready to use. Be sure to clear yum’s cache before attempting a yum update command:

yum clean all
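One caveat for the local repository above: yum reads repository metadata, not bare RPM files, so the directory needs to be indexed first. A sketch using the createrepo tool, assuming the /root/rpms directory from above:

```shell
# Install the metadata generator, then index the RPM directory;
# this writes a repodata/ subdirectory that yum can query
yum install createrepo
createrepo /root/rpms
```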

Show patches

Yum can display available security patches, without installing them, with this command:

yum updateinfo list security

List specific patches

If no output is returned, like in the screenshot above, this means there are no security patches available for any installed software on your system.

Update a single package

If you need to update a certain package without running an update for every application installed, just specify the name of the package in your yum update command.

yum update name-of-package

Multiple packages can be specified, separated by a space. You need to have the name of the package typed perfectly in order for yum to find it in its repositories; if you’re not sure of a package name, first check what packages are currently eligible for updates:

yum check-update

Update all but one package

If you need to run the yum update command but you wish to exclude a package from being updated, you can specify the --exclude option.

A common situation where administrators may find this necessary is with kernel updates, since these are major updates that could cause unpredictable errors on a production server. However, they may still want to run the command to update less sensitive applications.

To exclude a package (in this example, those related to the kernel):

yum update --exclude=kernel*

The asterisk acts as a wildcard, in case there are multiple related packages or you don’t know the full name of the package.

Alternatively:

yum update -x 'kernel*'

Exclude multiple packages

You can exclude multiple packages with more --exclude flags.

yum update --exclude=kernel* --exclude=httpd

Use this flag as in the example above, or the -x flag, as many times as needed.
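To make an exclusion permanent rather than per-command, the same patterns can also go into yum’s main configuration file (a sketch; kernel* and httpd are just the example packages from above):

```ini
# /etc/yum.conf -- packages matching these patterns are skipped
# by every yum transaction until the line is removed
exclude=kernel* httpd
```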

Check when last yum update ran

To see a list of yum transactions, with the date and time they were run, use the yum history command.

yum history

Check yum update history

In the screenshot above, you can see that the last time yum updated software was on January 4th.

Rollback (revert) update

A great feature of yum is that it allows you to undo a recent update, thus restoring the upgraded packages to their previous versions.

Each yum action (install, update, erase, etc) is assigned a transaction ID, and this ID must be specified when undoing a yum update. To see a list of transaction IDs for recent yum operations, use this command:

yum history

List yum history

In the screenshot above, you can see the last operation run with yum was to install the httpd package. Undoing an installation or an update works the same way, so in this example, we will undo this recent installation of httpd. As shown in the screenshot, this transaction has an ID of 7.

To undo this change and roll back the program to its previous version, issue this command:

yum history undo 7

Undo yum update

As usual, yum will summarize the changes to be made and ask if you’d like to proceed with a Y/N prompt. Enter Y and the specified transaction will be undone.

yum undo report

Clean up a failed yum update (Troubleshooting)

If one or more packages fail to upgrade successfully when you run the yum update command, the system can end up with duplicate packages installed (2 versions of the same program).

Sometimes, following the rollback instructions in the section above can fix the problem. If that doesn’t work, you can remove duplicate packages on your system with this command (package-cleanup is provided by the yum-utils package):

package-cleanup --dupes

Yum stores a cache of information for packages, metadata, and headers. If you encounter an error, clearing yum’s cache is a good first step in troubleshooting. Use the following command to do that:

yum clean all

yum clean command

Skip errors

When updating or installing a package, that package may require additional software in order to run correctly. Yum is aware of these dependencies and will try to resolve them during updates by installing or upgrading the extra packages that are needed.

If yum has trouble installing the necessary dependencies, it produces an error and doesn’t proceed further. This is a problem if you have other packages that need to be updated.

To instruct yum to proceed with updating other packages, skipping the ones with broken dependencies, you can specify the --skip-broken option in your yum update command.

yum update --skip-broken

Get a list of packages that need an update

Running the yum update command as normal, with no additional options, will output a list of available updates.

yum update

If you’d like to see some additional information about the package updates available, type this command:

yum updateinfo

To see information about security updates that are available for the system, type this command:

yum updateinfo security

Difference between yum check updates and list update

Although the two commands sound similar, there is a difference between checking for updates and listing updates in yum.

yum list updates

The command to list updates, shown above, will list all the packages in the repositories that have an update available. Keep in mind that some of the packages in the repositories may not even be installed on your system.

yum check-update

The command to check for updates, seen above, is a way to check for updates without prompting for interaction from the user. This is the command you would opt for if you were coding a script to check for updates, for example.

The check-update command will return an exit value of 100 if there are packages that have updates available, and it will return an exit value of 0 if there are no available updates.

A value of 1 is returned if an error is encountered. Use these exit codes to code your script accordingly.
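Those exit codes can drive a simple maintenance script. A minimal sketch; the report_updates function name is my own:

```shell
#!/bin/sh
# Translate the exit status of `yum check-update` into a message:
#   100 = updates available, 0 = up to date, anything else = error
report_updates() {
    case "$1" in
        100) echo "updates available" ;;
        0)   echo "system is up to date" ;;
        *)   echo "error while checking for updates" ;;
    esac
}

status=0
yum check-update > /dev/null 2>&1 || status=$?
report_updates "$status"
```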

Notify when updates are available

There are a few packages that can help manage yum updates on your system. Some can even notify administrators when yum has updates that are available to be installed. One such service is called yum-cron.

Install yum-cron using yum:

yum install yum-cron

Set the yum-cron service to start at boot:

systemctl enable yum-cron.service

systemctl start yum-cron.service

Configure the settings for yum-cron inside the configuration file using vi or your preferred text editor:

vi /etc/yum/yum-cron.conf

In this file, you can specify if the updates should be automatically applied or not. If you’d only like to receive notifications, fill out the email information inside the configuration file. Yum-cron will then send you an email anytime there are updates available for your system.

apply_updates = no  # don't apply updates automatically
email_from = root@localhost
email_to = admin@example.com
email_host = localhost

What port does yum update use

Yum uses port 80 (HTTP) or port 443 (HTTPS) when checking for updates, depending on the repository URLs configured on your system. If you look inside the repository files, you’ll see whether the links inside begin with http or https.

If you need to create a rule in your firewall to allow yum to function, allow outbound traffic on the corresponding port.

Yum update vs upgrade

So far, we have only talked about the yum update command in this tutorial, but there’s another very similar command: yum upgrade.

yum upgrade

There is a small difference between these two commands. Yum update will update the packages on your system, but skip removing obsolete packages.

Yum upgrade will also update all the packages on your system, but it will also remove the obsolete packages.

This inherently makes yum update the safer option, since you don’t have to worry about accidentally removing a necessary package when updating your software.

Use some discretion when issuing the yum upgrade command, since it may not preserve some packages that you are still using.

I hope you found the tutorial useful.

Keep coming back.

February 15, 2020

A small image resizing script in Python

Folks, as you know, I’m a Linux user, which is probably obvious from this blog. :P Purely out of habit, and because I love writing code, I like using the console. Recently at school I needed to resize some images and convert a video. Instead of wrestling with programs for two hours like on Windows, I got the job done right from the console with one-line commands (thanks to the wonderful ffmpeg and ImageMagick). The goal wasn’t to look cool, it was to finish the job quickly.

Our vice principal immediately quipped: that’s because you use Linux :) you know, the console and all :) I explained that this can be done outside the console too, but said I simply like it this way. Still, I went ahead and wrote a Python script for it as well.
I didn’t name it Feriha, I named it pyresim (I couldn’t think of a name, so I improvised). I also put it on GitHub :) so go have a look and use it :)
https://github.com/birtanyildiz/pyresim



February 11, 2020

İlledelinux Ubuntu System

The Ubuntu-based ISO image of the "Base System" type that I prepared is ready for use. As in "İlledelinux Debian System", package selection beyond the basic software is left to the user’s preference. However, more than thirty applications were adapted and, for quick access, most of them were gathered into a panel menu named "Special Applications". The special applications menu includes the following. Also added to the panel

February 09, 2020

Matplotlib tutorial (Plotting Graphs Using pyplot)

Matplotlib is a library in Python that creates 2D graphs to visualize data. Visualization always helps in better analysis of data and enhances the decision-making abilities of the user. In this matplotlib tutorial, we will plot some graphs and change some properties like fonts, labels, ranges, etc. First, we will install matplotlib, then we will start plotting some basic graphs. Before that, let’s see some of the graphs that matplotlib can draw.

Plot Types

There are a number of different plot types in matplotlib. This section briefly explains some plot types in matplotlib.

Line Plot

A line plot is a simple 2D line in the graph.

Contouring and Pseudocolor

We can represent a two-dimensional array in color by using the pcolormesh() function, even if the dimensions are unevenly spaced. Similarly, the contour() function draws contour lines for such data.

Histograms

To return the bin counts and probabilities in the form of a histogram, we use the function hist().

Paths

To add an arbitrary path in Matplotlib we use matplotlib.path module.

Streamplot

We can use the streamplot() function to plot the streamlines of a vector field. We can also map the colors and width of the lines to different parameters such as speed, time, etc.

Bar Charts

We can use the bar() function to make bar charts with a lot of customizations.

Other Types

Some other examples of plots in Matplotlib include:

  • Ellipses
  • Pie Charts
  • Tables
  • Scatter Plots
  • GUI widgets
  • Filled curves
  • Date handling
  • Log plots
  • Legends
  • TeX notations for text objects
  • Native TeX rendering
  • EEG GUI
  • XKCD-style sketch plots

Installation

Assuming that the path of Python is set in environment variables, you just need to use the pip command to install matplotlib package to get started.

Use the following command:

pip install matplotlib

On my system, the package is already installed. If the package isn’t already there, it will be downloaded and installed.

To import the package into your Python file, use the following statement:

import matplotlib.pyplot as plt

Here matplotlib is the library, and pyplot is a module that provides a MATLAB-like collection of plotting functions in Python.

Finally, we can use the alias plt to call these functions within the Python file.
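To verify the installation, here is a minimal script. The Agg backend and the file name first_plot.png are my own choices; Agg renders to a file, so this also runs on systems without a display:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; safe on headless systems
import matplotlib.pyplot as plt

plt.plot([1, 2, 3], [2, 4, 6])  # a simple line through three points
plt.savefig('first_plot.png')   # write the figure to disk instead of opening a window
```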

Vertical Line

To plot a vertical line with pyplot, you can use the axvline() function.

The syntax of axvline is as follows:

plt.axvline(x=0, ymin=0, ymax=1, **kwargs)

In this syntax: x is the x-axis coordinate at which the vertical line is drawn. ymin is the bottom of the plot and ymax is the top of the plot, both given as fractions of the axis height (0 to 1). **kwargs are the properties of the line such as color, label, line style, etc.

import matplotlib.pyplot as plt

plt.axvline(0.2, 0, 1, label='pyplot vertical line')
plt.legend()
plt.show()

In this example, we draw a vertical line. 0.2 means the line will be drawn at point 0.2 on the graph. 0 and 1 are ymin and ymax respectively.

label is one of the line properties. legend() is the function that enables the labels to be displayed on the plot. Finally, show() will open the plot or graph screen.

Horizontal Line

axhline() plots a horizontal line across the plot. The syntax of axhline() is as follows:

plt.axhline(y=0, xmin=0, xmax=1, **kwargs)

In the syntax: y is the y-axis coordinate from which the line is drawn horizontally. xmin is the left of the plot, xmax is the right of the plot. **kwargs are the properties of the line such as color, label, line style, etc.

Replace axvline() with axhline() in the previous example and you will have a horizontal line on the plot:

import matplotlib.pyplot as plt

ypoints = 0.2
plt.axhline(ypoints, 0, 1, label='pyplot horizontal line')
plt.legend()
plt.show()

Multiple Lines

To plot multiple vertical lines, we can create an array of x points/coordinates, then iterate through each element of the array to plot more than one line:

import matplotlib.pyplot as plt

xpoints = [0.2, 0.4, 0.6]
for p in xpoints:
    plt.axvline(p, label='pyplot vertical line')
plt.legend()
plt.show()

The output will be:

The above output doesn’t look very attractive, so we can use a different color for each line in the graph.

Consider the example below:

import matplotlib.pyplot as plt

xpoints = [0.2, 0.4, 0.6]
colors = ['g', 'c', 'm']
for p, c in zip(xpoints, colors):
    plt.axvline(p, label='line: {}'.format(p), c=c)
plt.legend()
plt.show()

In this example, we have an array of x coordinates and an array of Python color symbols. Using the zip() function, both arrays are merged together: the first element of xpoints with the first element of the colors array. This way, first line = green, second line = cyan, etc.

The braces {} act as a placeholder for inserting Python variables into a string with the help of the format() function. Therefore, the x coordinates appear in the plot legend.

The output of the above code:
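The zip-and-format pattern used in the loop can be seen on its own, outside of plotting:

```python
# The same pairing used in the plotting loop, shown in isolation
xpoints = [0.2, 0.4, 0.6]
colors = ['g', 'c', 'm']

# zip() pairs each coordinate with a color
pairs = list(zip(xpoints, colors))
print(pairs)    # → [(0.2, 'g'), (0.4, 'c'), (0.6, 'm')]

# format() fills the {} placeholder to build each legend label
labels = ['line: {}'.format(p) for p, _ in pairs]
print(labels)   # → ['line: 0.2', 'line: 0.4', 'line: 0.6']
```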

Just replace axvline() with axhline() in the previous example and you will have multiple horizontal lines on the plot:

import matplotlib.pyplot as plt

ypoints = [0.2, 0.4, 0.6, 0.68]
colors = ['b', 'k', 'y', 'm']
for p, c in zip(ypoints, colors):
    plt.axhline(p, label='line: {}'.format(p), c=c)
plt.legend()
plt.show()

The code is the same: we have an array of four y-axis points and different colors this time. Both arrays are merged together with the zip() function, the final array is iterated through, and axhline() plots the lines as shown in the output below:

Save Figure

After plotting your graph, how to save the output plot?

To save the plot, use savefig() of pyplot.

plt.savefig(fname, **kwargs)

Where fname is the name of the file. The destination or path can also be specified along with the name of the file. The kwargs parameter is optional. It’s used to change the orientation, format, facecolor, quality, dpi, etc.

import matplotlib.pyplot as plt

ypoints = [0.2, 0.4, 0.6, 0.68]
colors = ['b', 'k', 'y', 'm']
for p, c in zip(ypoints, colors):
    plt.axhline(p, label='line: {}'.format(p), c=c)
plt.legend()
plt.savefig('horizontal_lines.png')
plt.show()

The name of the file is horizontal_lines.png, and the file is saved in the directory where your Python file is stored:

Multiple Plots

All of the previous examples were about plotting in one plot. What about plotting multiple plots in the same figure?

You can generate multiple plots in the same figure with the help of the subplot() function of Python pyplot.

matplotlib.pyplot.subplot(nrows, ncols, index, **kwargs)

In the arguments, we have three integers specifying the number of rows, the number of columns, and the index at which the plot should be placed. You can think of the figure as a grid, and we are drawing on its cells.

The first number is nrows, the number of rows; the second is ncols, the number of columns; and then comes the index. Other optional arguments (**kwargs) include color, label, title, snap, etc.

Consider the following code to get a better understanding of how to plot more than one graph in one figure.

from matplotlib import pyplot as plt

plt.subplot(1, 2, 1)
x1 = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
y1 = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
plt.plot(x1, y1, color="c")

plt.subplot(1, 2, 2)
x2 = [40, 50, 60, 70, 80, 90, 100]
y2 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x2, y2, color="m")

plt.show()

The first thing is to define the location of the plot. In the first subplot, 1, 2, 1 states that we have 1 row, 2 columns and the current plot is going to be plotted at index 1. Similarly, 1, 2, 2 tells that we have 1 row, 2 columns but this time the plot at index 2.

The next step is to create arrays to plot integer points in the graph. Check out the output below:

This is how vertical subplots are drawn. To plot horizontal graphs, change the subplot rows and columns values as:

plt.subplot(2, 1, 1)

plt.subplot(2, 1, 2)

This means we have 2 rows and 1 column. The output will be like this:

Now let’s create a 2×2 grid of plots.

Consider the code below:

from matplotlib import pyplot as plt

plt.subplot(2, 2, 1)
x1 = [40, 50, 60, 70, 80, 90, 100]
y1 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x1, y1, color="c")

plt.subplot(2, 2, 2)
x2 = [40, 50, 60, 70, 80, 90, 100]
y2 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x2, y2, color="m")

plt.subplot(2, 2, 3)
x3 = [40, 50, 60, 70, 80, 90, 100]
y3 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x3, y3, color="g")

plt.subplot(2, 2, 4)
x4 = [40, 50, 60, 70, 80, 90, 100]
y4 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x4, y4, color="r")

plt.show()

The output is going to be:

In this example, 2,2,1 means 2 rows, 2 columns, and the plot will be at index 1. Similarly, 2,2,2 means 2 rows, 2 columns, and the plot will be at index 2 of the grid.

Font Size

We can change the font size of a plot with the help of a function called rc(). The rc() function is used to customize the rc settings. To use rc() to change the font size, use one of the forms below:

matplotlib.pyplot.rc('font', **font)

or

matplotlib.pyplot.rc('font', size=sizeInt)

The font in the syntax above is a user-defined dictionary, that specifies the weight, font family, font size, etc. of the text.

plt.rc('font', size=30)

This will change the font to 30, the output is going to be:
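The dictionary form lets you set several font properties at once. A minimal sketch; the property values here are my own examples:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; safe on headless systems
import matplotlib.pyplot as plt

# A user-defined dictionary of font properties (example values)
font = {'family': 'sans-serif', 'weight': 'bold', 'size': 14}
plt.rc('font', **font)  # equivalent to setting each rc parameter individually

print(plt.rcParams['font.size'])  # the new default font size (14)
```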

Axis Range

The range or limit of the x and y axis can be set by using the xlim() and ylim() functions of pyplot respectively.

matplotlib.pyplot.xlim([starting_point, ending_point])

matplotlib.pyplot.ylim([starting_point, ending_point])

Consider the example below to set x axis limit for the plot:

from matplotlib import pyplot as plt

x1 = [40, 50, 60, 70, 80, 90, 100]
y1 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x1, y1)
plt.xlim([0, 160])
plt.show()

In this example, the points on the x axis will range from 0 to 160, like this:

Similarly, to limit the y-axis coordinates, you would add the following line of code:

plt.ylim([0,160])

The output will be:

Label Axis

The labels for x and y axis can be created using the xlabel() and ylabel() functions of pyplot.

matplotlib.pyplot.xlabel(labeltext, labelfontdict, **kwargs)

matplotlib.pyplot.ylabel(labeltext, labelfontdict, **kwargs)

In the above syntax, labeltext is the text of the label and is a string; labelfontdict describes the font size, weight, and family of the label text, and it’s optional.

from matplotlib import pyplot as plt

x1 = [40, 50, 60, 70, 80, 90, 100]
y1 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x1, y1)
plt.xlabel('Like Geeks X Axis')
plt.ylabel('Like Geeks Y Axis')
plt.show()

In the above example, we have regular x and y arrays for x and y coordinates respectively. Then plt.xlabel() generates a text for x axis and plt.ylabel() generates a text for y axis.

Clear Plot

The clf() function of the pyplot clears the plot.

matplotlib.pyplot.clf()

The clf() function takes no arguments.

from matplotlib import pyplot as plt

x1 = [40, 50, 60, 70, 80, 90, 100]
y1 = [40, 50, 60, 70, 80, 90, 100]
plt.plot(x1, y1)
plt.xlabel('Like Geeks X Axis')
plt.ylabel('Like Geeks Y Axis')
plt.clf()
plt.show()

In this code, we created a plot and defined labels as well. After that, we used the clf() function to clear the plot, so show() displays an empty figure:

I hope you find the tutorial useful for getting started with matplotlib.

Keep coming back.

February 08, 2020

How to run multiple Firefox instances?

You may need different browsers for different tasks, and you may not want the extensions and settings you use in one browser to carry over to your other work. For example, a browser you customized for web development, tuned for watching films, or set up for some other special use may have to be reconfigured for other purposes. In the previous guide, on the same system, the same

Why You Should Protect Your Research Papers in Linux

Writing a research paper is a huge task in itself. When you think about how easy it is for other people to copy or plagiarize the paper you worked hard on, this should encourage you to learn how to protect your research papers and other documents on your Linux computer. To do this, you should improve network security and even the physical security of the data stored on your computer. To ensure that hackers won’t get to your files, it’s recommended to have several security measures in place.

From the research paper files themselves to the other files in your device, there’s a lot for you to keep secure. One of the most secure ways to protect your files is through encryption. This is a lot like keeping your files in a locked safe. You can even use encryption for safeguarding your website and other online data.

Encrypting involves generating a password or encryption key. The only person who can access encrypted data is the one holding the password. In this case, that would be you.

Encrypting your research papers and other files

For Linux, there are several tools you can use to create a secure container called an “encrypted volume” to keep all of your files and research papers. Rather than encrypting your files individually, placing all of them in an encrypted volume makes it more convenient for you to find your files.

However, if you copy your files on a flash drive or on another computer, you would have to encrypt them again to guarantee their safety.

With all the software available for Linux, you have to choose the right one. Some of these software options offer important features that allow you to customize your security settings.

Some tools even allow you to permanently encrypt your computer’s entire disk along with all the files stored on it, temporary files, installed programs, and even operating system files.

Finding the right tools

Linux manages access to files on your computer at three levels. Understand that on Linux, all stored content, including programs, is considered a file, and you might want to remember this when setting up access control.

The ownership levels consist of User (the person who created the file or owns the device), Group (a user group collectively granted access to specific files), and Other (everyone else).

Access permission is also set at three levels. First up is Read permission, by which you can open and read a file but cannot modify it. Next is Write permission, which allows you to modify the contents of a file.

To protect your valuable research paper, do not leave the Write level open. The final permission level is Execute, which allows a file to be run as a program. Write and Execute permissions must be seriously protected and granted only to you or another trusted person.

Properly applied, these levels of security in Linux will protect your data against hacking and corruption.
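These permission levels map directly onto chmod’s octal modes. A minimal sketch, with a placeholder file name:

```shell
# Placeholder file standing in for a research paper
touch research_paper.txt

# Octal 600: owner gets read and write, group and others get nothing
chmod 600 research_paper.txt

ls -l research_paper.txt   # the first column shows -rw-------
```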

A corrupted assignment at the wrong time might leave you asking the inevitable question: can I pay someone to do my assignment? Well, if it comes to this, EduBirdie, a reputable online service, can help you salvage your loss and provide you with professionally written research papers and other documents.

If you believe that your research papers and all other files you save on your computer are important, then security should be your first priority. If a person successfully hacks into your computer and corrupts your files, it’s a disaster.

Of course, it’s better for you to protect the files you worked hard on too. Since there are several options for Linux to encrypt your files and keep them safe, you need to do a bit of research to find the right one.

Search for the features you need and when you find a tool, app or software with this feature, dig deeper. The key is to learn everything you can about the digital tool before downloading or purchasing it.


Safely using encryption on Linux

Encryption alone won’t do much good if root access to the system is not protected. Linux restricts what other systems call administrator privileges to the authorized custodians, denying them to any other user.

This protects the system’s core and stored files from inadvertent attacks by worms and viruses through careless web browsing.

Next, you need to choose easy to use but highly secure tools for encrypting your files. One way is to create containers which are similar to zipped files and encrypt these using the VeraCrypt tool. The beauty of this container is that it is also portable and accessible on both Linux and Windows.

Alternatively, why not just encrypt the whole disk? This however presupposes that you won’t leave your computer on and unattended, as your files will be unprotected. Having encrypted containers or one containing all others is the best way to keep your data safe.

You can also encrypt files on Linux using the Files application: choose a compression format for your document, add a password, and compress.
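For a single file, a quick command-line alternative to the container tools mentioned above is password-based encryption with OpenSSL. This is a sketch; the file names and passphrase are placeholders, and the -pbkdf2 option assumes OpenSSL 1.1.1 or newer:

```shell
# Placeholder file standing in for a research paper
echo "draft results" > paper.txt

# Encrypt with a passphrase (AES-256-CBC, key derived with PBKDF2)
openssl enc -aes-256-cbc -pbkdf2 -salt -in paper.txt -out paper.txt.enc -pass pass:mysecret

# Decrypt again to verify the round trip
openssl enc -d -aes-256-cbc -pbkdf2 -in paper.txt.enc -out decrypted.txt -pass pass:mysecret
```

In real use, prefer an interactive passphrase prompt over pass: on the command line, since command lines can be visible to other users on the system.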

Another simple way to protect your research data is to secure your account in the first place with a strong password. All of these are features the Linux operating system provides for the security of your data.

Keeping your stored research papers and other data safe is essential, especially if you put a lot of effort and time into creating them.

While encryption helps reduce the safety risk greatly, this won’t eliminate the risk completely. Therefore, you should also learn how to protect the most sensitive information you have stored on your computer.

If there is anything on your computer that could be potentially damaging to you and that you don’t really need to store, delete it and keep your system clean.

For those who can’t delete, use the best encryption tool to keep them safe. While encryption isn’t completely safe, it’s still the best option. To increase your files’ safety even more, here are some tips to keep in mind:

  • If you plan to walk away from your computer, close all of your files. Even if it’s your home computer if you plan to leave it overnight, close all of your files to prevent remote intruders from accessing them.
  • Even when putting your computer to sleep or in hibernation mode, close all of your files.
  • Never allow other people to use your computer without closing your files first.
  • Also, close all of your files before you insert a flash disk or other external storage devices into your computer. This applies even if the storage device belongs to someone you know.
  • If you store encrypted volumes on a flash disk, never leave it lying around. Keep it in a secure place to ensure the safety of the data stored within it.

These practical tips are essential to ensure the safety of your files. Combining encryption with other safety features is the best way to protect your data in Linux.

Linux performance

The Linux OS has advantages over other operating systems beyond the security of your research data. It is an open-source OS that you can modify to create a personalized version, free of license restrictions.

The General Public License that comes with Linux will save you a great deal in license fees and software costs, and Linux is compatible with many standard Unix packages and can process most file formats.

A Linux operating system is a smart option in many ways for storing your valuable data. Remember to work with a good programmer to set it up right, so you enjoy maximum performance and security.

Conclusion

As a Linux user, your application and tool options grow each day. Now you can find an app to help you encrypt your data and keep it safe from those who want to corrupt or copy it. Your research papers and all the other files stored on your computer belong to you and no one else.

If you want to keep it that way, learning how to protect your data is key. These days, the huge collection of Linux tools and apps has great potential.

Once you find the tool you need, you can install it then start using it right away. That way, you won’t have to worry about the safety of your digital data.

February 07, 2020

How to run multiple Chromium instances?

It is possible to run multiple Chromium browsers on one system without them interfering with each other. To do this, we will use Chromium’s option for creating a new user data directory. If you have the same need I did, let’s get straight into it. First, create a directory in a location, and with a name, of your own choosing. For example, I picked the following path and name: /home/illedelinux
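In outline, the approach looks like this (a sketch assuming the browser binary is named chromium; the profile path is just an example, and the launch line is commented out so you can copy it when needed):

```shell
# Create a dedicated user data directory for the second instance
mkdir -p "$HOME/illedelinux-profile"

# Launching Chromium with --user-data-dir makes it use this directory
# instead of the default profile, so the two instances stay independent:
# chromium --user-data-dir="$HOME/illedelinux-profile"
```

Each additional instance just needs its own directory passed via --user-data-dir.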

February 03, 2020

İlledelinux Debian System updated

(Re-updated on 2020-02-03; this release adds close to 30 features, up to and including a TV-watching option, and improves the existing ones.) Video: İlledelinux Debian System. I updated the image with a number of fixes based on user feedback. I removed the packages I considered unnecessary, and the image size dropped by half. The Package Install tool, which I adapted for user selection (

February 01, 2020

WireGuard To Be Added to Linux Kernel 5.6

Linus Torvalds had announced that WireGuard would be merged into the Linux Kernel 5.6 mainline, and the founder of WireGuard then confirmed the news. WireGuard is an open-source application and protocol developed to create secure P2P connections in bridged or routed setups using VPN techniques. That WireGuard is being added in 5.6 is not really surprising, because

January 31, 2020

January 25-26, 2020 – About the Free Open Source and Linux Administrator Training Event

As a project undertaken in the spirit of social responsibility, and with the aim of raising awareness of the importance and necessity of Linux systems and the open-source world, guiding young professionals and students in the field, and building their knowledge and skills, I held a free Linux Administrator training on January 25-26, 2020. I compiled the training from Red Hat and LPI materials, covering the RedHat/CentOS and Ubuntu/Debian distributions...

Continue Reading

January 29, 2020

LibreOffice 6.4 Stable Released

The Document Foundation has announced LibreOffice 6.4, the first release of 2020, with a focus on performance. The new release brings better performance, especially when opening and saving presentations and spreadsheets, and much improved compatibility with DOCX, XLSX, and PPTX files. Another novelty in LibreOffice 6.4 is the QR code generator feature. The performance improvements

Linux Kernel 5.5 “Kleptomaniac Octopus” Released

Linux Kernel 5.5 brings Raspberry Pi 4/BCM2711 support, various performance changes, NVMe drive temperature reporting, a new kernel driver, AMD HDCP support for content protection, wake-on-voice support for Chromebooks, new RAID 1 modes for Btrfs, and more. What’s new in Linux Kernel 5.5? AMDGPU HDCP support for Raven Ridge and newer, POWER

January 24, 2020

Rocket League Ending Linux Support

Psyonix, the developer and publisher of the car-soccer game Rocket League, has announced in a short message that support for the macOS and Linux versions will end. The announcement says Rocket League will keep being updated with new technologies, which makes supporting the macOS and Linux platforms harder. Rocket League’s final patch for macOS and Linux

January 17, 2020

Use gnusu instead of gksu

We used to rely on gksu to make working as root easier; as you know, gksu was removed over certain security concerns. In its place you can use gnusu, a tool of my own creation that I named gnusu and that runs on top of pkexec. Since gnusu is pkexec-based, it poses no security problem; distribution developers themselves recommend that we use pkexec. On your system, root

January 11, 2020

How to install the 5.3 Linux kernel on Debian 10?

As you may know, Debian 10 ships with the 4.19 Linux kernel. You may, however, want the latest stable version of the Linux kernel, 5.3. The first thing to do is enable the Backports repository, which means editing the /etc/apt/sources.list file. Before making any changes, it is wise to back up /etc/apt/sources.list; let’s do that with the command sudo cp /etc/apt/sources.list /etc/apt/sources.list.yedek. Now we can edit /etc/apt/sources.list, using the Nano text editor. The

sudo nano -w /etc/apt/sources.list

command opens /etc/apt/sources.list. If you don’t use sudo, you can become root by running su in the terminal and entering the root password, and then the

nano -w /etc/apt/sources.list

command opens the file. First, add a comment line to mark the new entry:

# Debian Buster Backports.

Then, add the new backports repository line for Debian 10 Buster:

deb http://deb.debian.org/debian buster-backports main

After adding the repository line, save and exit Nano by pressing Ctrl + O and then Ctrl + X. Now we need to update the package lists so the repository is ready to use. Run the following command:

sudo apt update

With Debian Backports enabled on the system, we should now be able to find the 5.3 Linux kernel in the software repositories. Let’s run this command:

apt search linux-image

Running the command above lists, in the terminal, the various Linux kernel versions available for Debian 10 stable.

Alternatively, you can run the apt search and filter it for “buster-backports” with the grep command:

apt search linux-image | grep buster-backports

Among the search results, you will see two variants of the 5.3 Linux kernel:

linux-image-5.3.0-0.bpo.2-amd64

linux-image-5.3.0-0.bpo.2-cloud-amd64

If you run Debian 10 stable on a desktop or laptop, the linux-image-5.3.0-0.bpo.2-amd64 package is the one to install, because this build includes all the desktop Linux drivers the system needs to work. If you run Debian on a server, however, installing linux-image-5.3.0-0.bpo.2-cloud-amd64 may be more appropriate. We can now install the 5.3 Linux kernel on Debian 10 from its own software repositories, using the apt command:

sudo apt install linux-image-5.3.0-0.bpo.2-amd64

You should also make sure you install the matching linux-headers package:

sudo apt install linux-headers-5.3.0-0.bpo.2-amd64

Need to run Linux Kernel 5.3 on a Debian server? First, decide whether you need the 5.3 cloud kernel or the 5.3 desktop kernel. Then run one of the following commands:

sudo apt install linux-image-5.3.0-0.bpo.2-cloud-amd64

or

sudo apt install linux-image-5.3.0-0.bpo.2-amd64

Depending on which version you installed, make sure you also install the matching linux-headers package with one of these commands:

sudo apt install linux-headers-5.3.0-0.bpo.2-cloud-amd64

or

sudo apt install linux-headers-5.3.0-0.bpo.2-amd64

Once the installation is complete, reboot your Debian 10 system, then run the following in a terminal:

uname -a

You should see your new kernel in the output:

January 08, 2020

Automated Snapshot-Based Backup/Restore on oVirt (Open Virtualization Manager) with the REST API

We will perform this operation using an online full-backup script previously written in Python for this purpose; you can download it from https://github.com/wefixit-AT/oVirtBackup. BACKUP: all we need to do is adapt this tool to our own environment and then automate it with crontab. Let’s walk through how this is done in the steps below. First, on the machine that will trigger the backup job in the oVirt environment (where the Python script...

Continue Reading

January 05, 2020

How to secure your internet activity with Linux system and VPN

Hackers can access, steal, and sell your online activity data, and even manipulate it, if you don’t use the right system and tools. The level of protection you want will largely influence which tools and systems to use. With a Linux system and a VPN, it becomes possible to hide your browsing tracks, personal information, and various other online activities. When you have the right protection in place, not even the government can access your activity. Keep reading to learn how businesses and individuals alike can use a Linux system and VPN for ongoing protection of their online data. We will also explore why this is important and why you should care about your online data. Hackers steal data for a number of reasons. Sometimes, it’s for their own purposes. Other times, they sell it or give it to other entities. These entities may or may not have known about the data collection processes the hackers use to gather the data.

What Is a VPN?

VPN stands for virtual private network. A VPN provides encryption, making it difficult for the bad guys to steal your data when you visit their sites. It also acts as an added layer of protection against the government tracking your online activity.

In some areas, a VPN even grants users access to certain content that is not normally available in their geographical areas. Such forms of content often include video, international gaming, certain servers, etc.

The VPN works to protect your online activity by making it appear as if you are logged in from a different location. As soon as you connect to the VPN, you can set your location to anywhere in the world.

Additionally, with a Linux system, you can improve the safety and protection of your data thanks to advanced security measures. Fixes for Linux program exploits made by hackers are generally developed and released well before other operating systems develop and release fixes for their equivalent programs.

How Does a Linux System Make Online Activity More Secure?

Getting fixes to exploits is of the utmost importance in both personal and business settings, particularly those sitting on large amounts of data.

Hackers, crackers, and phreakers steal people’s online data all the time, for multiple reasons. Some do it to fight for a cause, some steal it unintentionally, some do it for fun, a few do it for commercial espionage or sabotage, and lastly, it’s not uncommon for disgruntled employees to steal data for whistleblowing purposes.

A Linux system helps avoid several types of attacks:

  • Reading data
  • Denial of service
  • Altering/manipulating data
  • Access to system

Tips for Increasing Data Protection With a Linux System

To increase data protection through the use of a Linux system, you must first pinpoint what you mean by “secure.” To do this, you must assess what you intend to do with the system and just how secure you need the data to be. In most cases, Linux systems need security, at a minimum, in the following areas:

  • Authorization: Do not allow ANYONE access to the system unless they NEED access
  • Verification: Make sure users go through a two-step authentication process to verify their identity each time they log into the system
  • Integrity: All personal information must be protected and NEVER compromised
  • Non-repudiation: You must have proof of receipt of data: an official record showing how you received the data and from whom
  • Privacy/confidentiality: You must abide by any privacy and confidentiality regulations, such as the ISO 7498-2 security standard from the International Organization for Standardization
  • Availability: System must be able to perform its required function at all times during normal operating hours while maintaining security around the clock.

Choose a Native App

When installing a VPN on a Linux system, you will have two options: an open-source client or a native app. With a native app, you get access to more features with less configuration required.

Because of this, it is highly suggested that any VPN you use at least comes in the form of a native client for Linux.

In addition to the dedicated app, users of a VPN that comes as a native client enjoy sophisticated security, ultra-fast speeds, and the ability to run it from a command-line interface. Additionally, the server list is always kept up to date, making it simple to download and to switch between UDP and TCP over the OpenVPN protocol.

Run Through Services and Customize Each of Them

When using Linux with a VPN, you will have several types of facilities to choose from, including mail and WWW. Linux handles some of these services through a system of ports.

Take, for example, port 21, which controls FTP. You can check the service names in the /etc/services file for a map of port numbers.
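As a quick sketch, you can query that map directly; the exact entries printed depend on your distribution’s /etc/services file:

```shell
# Look up the port for the FTP service in /etc/services
grep -E '^ftp[[:space:]]' /etc/services

# The same lookup through the system resolver
getent services ftp
```

Both lines should report 21/tcp on a typical Linux install.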

It’s ideal to have most of your services managed through the /etc/inetd.conf configuration file. Take your time when working through this file, as it lets you customize how each of the available services runs and is protected.

Keep Services in inetd.conf Turned OFF

Check the services in inetd.conf and make sure they are not set to turn on by default. To achieve maximum security, you must turn them off. You can type the command netstat -vat to see which services are currently running on your Linux system; alternatively, you can use the ss command. For any services you are unfamiliar with, make sure to look them up in /etc/inetd.conf.
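A minimal version of that check with the modern ss tool (shipped in the iproute2 package) might look like this; the sockets listed will of course depend on your machine:

```shell
# List listening TCP sockets with numeric ports, no DNS resolution
ss -tln
```

Anything listening on a port you don’t recognize is worth investigating before you leave it enabled.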

Final Thoughts

There are numerous VPNs to choose from. The surfshark.com VPN is especially ideal for those who want to unblock lots of region-locked content from sources such as Netflix, Amazon Prime Video and Hulu.

Users of this VPN are also huge fans of their ability to connect to the VPN through an unlimited number of devices. This is an example of a VPN that has the features you should look for when researching for ways to use a Linux system to secure internet activity.

Anonymized Data Is Not Anonymous

We have all more or less accepted that we are living in some kind of dime-store George Orwell novel where our every movement is tracked and recorded in some way. Everything we do today, especially if there’s any kind of gadget or electronics involved, generates data that is of interest to someone. That data is constantly being gathered and stored, used by someone to build up a picture of the world around us. The average person today is much more aware of the importance of their own data security. We all understand that the wrong data in the wrong hands can be used to wreak havoc on both individuals and society as a whole. Now that there is a much greater general awareness of the importance of data privacy, it is much more difficult for malicious actors to unscrupulously gather sensitive data from us, as most people know not to hand it over.

Data Protection Laws

In most jurisdictions, there are laws and regulations in place that govern how personal data can be collected, stored, shared, and accessed.

While these laws are severely lacking in a number of areas, the trend in recent years has been to increasingly protect individuals from corporate negligence and excess, which has been welcomed by most consumers.

Probably the best-known data protection law is the famed GDPR, or the General Data Protection Regulation which came into force in 2018. Though in theory it has power only within the EU, in practice the law applies to every company that deals with EU citizens.

Its strict privacy requirements have made many businesses reconsider how they handle data, threatening violators with fines that can climb into the billions of euros (up to 4% of a company’s annual turnover).

Unlike the EU, the US has no single regulation on the federal level to protect the data of its citizens. Acknowledging that, some states have released their own privacy laws.

Probably the most extensive of them to date is the CCPA, or the California Consumer Privacy Act.

The act comes into force at the beginning of 2020 and grants the citizens of California many of the same rights that EU citizens have come to enjoy.

It allows Californians to know what data is collected about them and how it is used, to refuse the sale of their data, and to request its deletion.

Anonymized Data

One common theme that has emerged in the regulations from different jurisdictions is the notion of anonymized data. As the name implies, this is data that cannot be tied to a specific individual.

A set of anonymized data may still describe a particular individual, but the identity of the subject is not revealed in the data.

Data anonymization presents an attractive common ground between the rights of consumers and those that want to make use of their personal data.

After all, information about who we are and what we do has long been the driving force behind many of today’s largest companies, including Google, Facebook, and Amazon.

But private corporations are not the only beneficiaries of our data. By removing any personally identifiable information from a dataset and anonymizing it, researchers can work with large, detailed datasets that contain a wealth of information without compromising any individual’s privacy.

By anonymizing data, we are also able to encourage people to share data that they would otherwise hold on to. Businesses and governments can access and trade vast amounts of data without infringing anyone’s privacy, thanks to anonymization.

Meanwhile, users don’t have to worry about data they generate being recorded and revealing information about them personally.

Data Anonymization Techniques

There are many ways to anonymize data, varying in cost and difficulty.

Perhaps the easiest technique is simply to remove some of the user’s direct identifiers, that is, their main personal information. For instance, an insurance company could delete a customer’s name and date of birth and call the data as good as anonymized.

Another method is to generalize the data of multiple users to reduce their precision. For instance, you could remove the last digits of a postcode or present a person’s age in a range rather than the exact number.

It is one of the methods Google uses to achieve k-anonymity – this elaborate term simply means that a certain number of people (defined by the letter k) should share the same property, such as ZIP code.
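As a toy illustration of what k-anonymity means in practice, the snippet below groups a made-up dataset by ZIP code and counts each group. The file name, the columns, and the k = 2 threshold are all invented for the example:

```shell
# Build a tiny dataset: name is a direct identifier, ZIP is a quasi-identifier
cat > records.csv <<'EOF'
name,age,zip
alice,34,10001
bob,35,10001
carol,29,94110
EOF

# Group records by ZIP code and count each group; any group smaller
# than k (here k = 2) leaves its member at risk of re-identification
awk -F, 'NR > 1 { count[$3]++ } END { for (z in count) print z, count[z] }' records.csv | sort
```

ZIP 10001 is shared by two records and meets k = 2, while the single 94110 record would still be at risk of re-identification.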

One more way is to include noise into the dataset. By noise I mean swapping around the information about certain properties between individuals or groups.

For example, this method could switch your car ownership details with another person. Your profile would change, but the whole dataset would remain intact for statistical analysis.

Finally, you can further protect the anonymized data you need to share by sampling it – that is, releasing the dataset in small batches. In theory, sampling helps to reduce the risk of re-identification.

Even if the data is enough to identify you as an individual, statistically there should be at least several other people with the same characteristics as you. Without having the whole dataset, there is no way to tell which person it really is.

Other data anonymization techniques exist, but these are some of the main ones.

Deanonymization

So, anonymization makes everyone a winner, right? Well, not quite.

Anyone who has worked extensively with data can testify as to just how little information is needed to identify a specific individual out of a database of many thousands.

One of the consequences of the massive volumes of data that now exists on all of us is that different data sources can be cross-referenced to identify common elements.

In some cases, this cross-referencing can instantly deanonymize entire data sets, depending on how exactly they have been anonymized.

Researchers were able to recover surnames of US males from a database of genetic information by simply making use of publicly available internet resources.

A publicly available dataset of London’s bike-sharing service could be used not only to track trips but also who actually made them.

Anonymized Netflix movie ratings were mapped to individuals by cross-referencing them with IMDB data, thus revealing some very private facts about users. These are only a few of the many similar examples.

Since the introduction of the GDPR, a number of businesses have been looking for ways of continuing to handle large volumes of customer data without falling afoul of the new regulations.

Many organizations have come to view anonymized datasets as a means of potentially circumventing the regulations. After all, if data isn’t tied to specific individuals, it can’t infringe on their privacy.

No Such Thing as Anonymous

According to new research conducted by researchers from Imperial College London, along with their counterparts at Belgium’s Université Catholique de Louvain, it is incredibly hard to properly anonymize data.

For data to be completely anonymous, it needs to be presented in isolation. You can use a VPN or change your IP address (you can find more information about proxy servers on Proxyway), and so on.

If enough anonymized data is given about an individual, all it takes is a simple cross-reference with other databases to ascertain who the data concerns.

Using their own prediction model, the researchers made a startling discovery: it would take only 15 pieces of demographic information to re-identify 99.98% of Americans.

What is more, only four base attributes (ZIP code, date of birth, gender, and number of children) would be needed to confidently identify 79.4% of the entire state of Massachusetts. According to the study, releasing data in small samples is not enough to protect an individual from detection.

Bearing in mind that researchers can deanonymize the records of an entire state, consider that data brokers like Experian are selling anonymized data sets that contain hundreds of data points for each individual.

According to the researchers’ work, this data is anonymized in name only and anyone with the capacity to handle large datasets also has the resources to easily deanonymize them.

It doesn’t matter what methods are used to anonymize data. Even the more advanced techniques like k-anonymity might not be sufficient – not to mention that they are expensive.

In most cases, all that happens is that only immediately identifiable data like names and addresses are removed. This is far from enough.

The researchers’ findings urge us not to fall into a false sense of security. They also challenge the methods companies use to anonymize data in light of the strict regulatory requirements set forth by the GDPR and the forthcoming CCPA.

Wrap-Up

The long battle to get the average internet user to care about their data and privacy has been a tiring one. Anyone who has worked in cybersecurity over the last couple of decades can testify as to how much things have improved, but there is still a long way to go.

The notion that people’s data can be anonymized and rendered harmless is both incorrect and dangerous. It is important that people properly understand the implications of handing their data over. Don’t give up your data under the false impression that it can’t be tied to you.

December 30, 2019

Everything There Is To Know About Online Security

Online security is a major topic of discussion nowadays, with so many threats to your privacy (and even your livelihood in some cases). Thanks to the ever-changing nature of technology, those dangers evolve right alongside it. So, while a truly “complete” guide isn’t achievable, we’ve done our best to cover all bases. Note that advice like “use an antivirus” and “always update your software” should be common sense by now, so we won’t hammer on about those.

The Basics: HTTPS

HTTPS is the Secured version of the HyperText Transfer Protocol (HTTP) that lets you view pages in the first place. It uses SSL/TLS encryption to make sure the connection between you and the websites you browse remains private, including any passwords and sensitive data you transmit.

Despite all this fancy phrasing, it’s as simple as using websites that have a (usually) green padlock next to the address bar.

You don’t need to go to extreme lengths to have some basic protection. Just use HTTPS websites exclusively and you already have your first line of defense.

There’s even a browser add-on called HTTPS Everywhere from the Electronic Frontier Foundation that attempts to force an HTTPS connection where possible.

Websites that don’t use HTTPS are punished in search result rankings by Google, while Mozilla has been phasing out features for non-secure websites. All of this is an orchestrated effort by such organizations to encrypt the entire Internet and make it safer to browse.

Obviously, companies like Google don’t have the best track record when it comes to your online privacy – but we can appreciate them doing some good every once in a while.

Their business model relies primarily on advertisements and mass data collection, so let’s look at how those can affect you.

Ads Can Get You in Trouble

Let’s be honest, nobody really ‘likes’ ads – but we do love supporting content creators in any way we can. Don’t be in such a hurry to disable your ad-blocker on your favorite news site or while watching YouTube, though.

Why? Well, just take a look at what happened in 2016 to such major sites as the New York Times, BBC, and the NFL. In short, their ads contained a strain of ransomware that encrypted victims’ hard drives and demanded a Bitcoin ransom.

Keep in mind: these aren’t just some sketchy websites where you’d expect malware from a mile away.

The major stinger is that people didn’t even need to click the ads for the attack to happen, according to Malwarebytes. Sure, the targeted people had out of date software with security holes – but who’s to say when an “updated” program will be hit next?

If you haven’t already, be sure to get a good ad-blocking extension for your browser, and maybe a script-blocker as well, considering the number of malicious JavaScript attacks out there. You’ll find a couple of great recommendations in the section below.

uBlock Origin and uMatrix

This duo of browser add-ons is a godsend to anyone who despises ads, pop-ups, auto-playing videos, and any other Internet nuisances.

They were both created by Raymond Hill, who not only develops and provides them for free, but also explicitly refuses donations of any kind.

Performance-wise, uBlock Origin (uBO) was benchmarked against AdBlock Plus (ABP) and it’s pretty clear who the winner is. Moreover, it has no “acceptable ads” program like ABP, where advertisers pay them to whitelist their ads.

Depending on which filter lists you use (and there are plenty of them), uBO will also block ad tracking scripts that, well, track your browsing habits.

uMatrix has much of the same functionality, though it also allows you to block anything a website might throw at you:

  • Cookies
  • Audio and video media, and even images
  • Scripts, XHR, CSS elements, and frames

The fact that it stops requests from the domains you blacklist, across all websites, means you can get around Facebook’s “unavoidable” tracking.

You know: the thing that tracks your browsing habits even if you don’t have a Facebook account, just because a page has a Like/Share button. That’s a neat example of how to use uMatrix to preserve your privacy.

As a word of warning, this extension is geared towards advanced users. Don’t worry though; once you use it for several websites it’ll become second nature.

Everyone’s out for Your Data

We wish this was an exaggeration, but just look at how many people want your browsing habits for various reasons:

  • Internet Service Providers have been selling your browsing and location data for a profit
  • Government surveillance is at an all-time high, and more people are recognizing it since the Snowden revelations in 2013
  • Hacker numbers are increasing, with over 4 billion records exposed in the first half of 2019 alone
  • Almost 80% of websites have some form of ad tracking installed (which you can block with the previously mentioned add-ons)

It’s no wonder that nearly 25% of all Internet users use a Virtual Private Network (VPN) nowadays. If you’re not up to speed, a VPN encrypts your data, making it unreadable to anybody who does not hold the cryptographic key.

This means none of the four “usual suspects” above can see what you’re doing online. Moreover, any sensitive operations such as online banking, payments, and logging in to various services will be safe from hacking attempts.

On the minor downside, using a VPN tends to slow down your connection due to multiple factors: the distance between you and the server, the CPU cost of encryption and decryption, and so on.

Fortunately, a super-fast VPN like ExpressVPN can help alleviate that. Since they have servers in 94 countries, it’s super easy to find one close to you – even when traveling abroad.

Free Wi-Fi = Free Hackers

Speaking of traveling – everyone loves using free Wi-Fi, especially on vacation. But have you ever noticed that your local café or that hotel you were staying at had two networks with the exact same name? Then you’ve most likely had an encounter with “Evil Twin” Wi-Fi hotspots.

Basically, hackers rely on peoples’ excitement for free stuff, so they create their own hotspots that mimic the real thing. Once you’re connected, your data is as good as stolen. Unless you use a VPN to encrypt it before leaving your device, that is.

In fact, this method was recommended by the Wi-Fi Alliance itself, since cyber criminals make it next to impossible to distinguish between a legitimate hotspot and a fake one. They even go as far as using the same SSID name and cloning the MAC address of the network.

Using a VPN is also a good idea even if you’re 100% sure that you’re connecting to the real thing, and the network is password-protected.

The reason being that both WPA2 and WPA3 (the current and latest Wi-Fi encryption protocols) suffer from security exploits that even an average-level hacker can profit from.

Take Care of Your Passwords

You wouldn’t think “password” would break the top 5 most common passwords, but it does. The top one is “123456” just for comparison. Your takeaway from here should be: never use weak passwords for your accounts. Oh, and don’t re-use them for others either.

Use a good password manager to help you create and store strong passwords that can’t be brute-forced in five minutes by a bored teenager with a video tutorial. As a side benefit, using a password manager helps you avoid phishing scams.

Here’s how it goes down:

  • Cybercriminals create a fake website that mimics legitimate services (PayPal, home banking, etc.)
  • They send you an email saying you need to update your info and provide a link to their fake site
  • Then they wait for you to type in your login info willingly

Fortunately, your password manager literally won’t input your login details because it can’t recognize the website as the correct one. Hackers are pretty crafty with their fakes nowadays, but this way they can’t rely on human error for their schemes.

Multi-Factor Authentication (MFA)

Many of these hacking attempts can be stopped in their tracks by simply having SMS two-factor authentication (2FA) enabled. It’s not the best choice, but as many security guides will tell you: “it’s better than nothing.”

The better option is to use an authenticator app such as Authy, Google Authenticator, and others. There are also hardware authenticator tokens that you can plug into a USB port or hold against your phone for the same effect.

Watch Out for Voicemail

What does voicemail have to do with online security? A lot, as it turns out. Since many people don’t bother to secure their voicemail account with a long password, hackers can simply use a brute-force attack to gain access to it.

Then, by using the password reset function on your accounts, they can ask for the reset tokens to be sent through a voice call. All they must do is make sure that call never reaches you and goes to voicemail instead. Voila, your account has been hacked.

Text-based 2FA won’t protect you in this case, so the best thing to do would be to disable your voicemail entirely. You may also call your own phone carrier and ask for assistance with this issue if yours isn’t on the list.

If you really want to keep voicemail around, you need to protect it with a long random password as we mentioned. iPhone users simply need to go to Settings > Phone > Change Voicemail Password.

Use Encrypted Email Services

We’ve mentioned Google’s anti-privacy practices in the beginning. And while they say they’ve stopped reading your emails, the Wall Street Journal says otherwise. Practices of this kind are all fairly well documented for these big tech giants – there’s no secret here.

So if you don’t like your private life spied on by some poorly paid contractor somewhere, consider switching to an encrypted email provider.

Since your emails are encrypted, not even the providers themselves can read them. Even if hackers somehow breached their databases, all they’d find is undecipherable gibberish.

ProtonMail is a good choice, but there are plenty of others out there if you need something different. Ultimately, they all allow you to keep your business between you and the recipient.

Dealing with Social Media

There is no expectation of privacy on social media. Don’t look at us – those words were from Facebook counsel Orin Snyder. While that’s a heavy-handed way of putting it, it’s 100% true.

The only logical way of dealing with your social accounts (if you need online privacy and security) is to delete them.

If you need to keep them for whatever reason, you can at least control how much data they have on you. To avoid being a victim of the next Cambridge Analytica, these are your only two options. Now, you can make it easier to clean up your socials with a couple of apps.

The first one is Jumbo for iOS and Android. Not only can it set all your privacy settings on most services to “maximum” without collecting any data, but it can also delete your Tweets (3200 at a time; that’s a Twitter limitation), old Facebook posts, and even Amazon Alexa recordings.

Another one is MyPermissions, which allows you to see what apps you’ve connected to your Facebook, Twitter, and other accounts.

They can be viewed, removed, and reported (if you find anything fishy) in a single interface. You can also change the data access privileges on the apps if you intend to keep them.

Don’t want yet another phone app? Social Post Book Manager (Chrome extension) and TweetDelete are great alternatives to delete those embarrassing college posts.

Linux find command tutorial (with examples)

When it comes to locating files or directories on your system, the find command on Linux is unparalleled. It’s simple to use, yet has a lot of different options that allow you to fine-tune your search for files. Read on to see examples of how you can wield this command to find anything on your system. Every file is only a few keystrokes away once you know how to use the find command in Linux.

Find a directory

You can tell the find command to look specifically for directories with the -type d option. This makes find search only for matching directory names, not file names.

find /path/to/search -type d -name "name-of-dir"

Find directory

Find hidden files

Since hidden files and directories in Linux begin with a period, we can specify this search pattern in our search string in order to recursively list hidden files and directories.

find /path/to/search -name ".*"
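As a quick local sanity check (the filenames are made up, and GNU findutils is assumed), this sketch creates one hidden and one visible file in a scratch directory, then matches only the hidden one:

```shell
# Create a scratch directory with one visible and one hidden file.
dir=$(mktemp -d)
touch "$dir/visible.txt" "$dir/.hidden.conf"

# Only names beginning with a period match ".*"; -type f keeps the
# search root directory itself out of the results.
hidden=$(find "$dir" -type f -name ".*")
echo "$hidden"
```

The visible file is skipped because its name doesn’t start with a period.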

Find files of a certain size or greater than X

The -size option on find allows us to search for files of a specific size. It can be used to find files of an exact size, files that are larger or smaller than a certain size, or files that fit into a specified size range. Here are some examples:

Search for files bigger than 10MB in size:

find /path/to/search -size +10M

Search for files smaller than 10MB in size:

find /path/to/search -size -10M

Search for files that are exactly 10MB in size:

find /path/to/search -size 10M

Search for files that are between 100MB and 1GB in size:

find /path/to/search -size +100M -size -1G
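You can verify the size filtering locally with sparse files (a sketch with invented filenames, assuming GNU coreutils’ truncate, which sets a file’s apparent size without writing data):

```shell
# Two sparse files of different apparent sizes.
dir=$(mktemp -d)
truncate -s 2M "$dir/big.bin"
truncate -s 100K "$dir/small.bin"

# -size +1M matches files strictly larger than 1 MiB
# (find rounds sizes up to whole MiB units).
big=$(find "$dir" -type f -size +1M)
```

Only big.bin survives the filter; the 100K file rounds up to exactly 1M, which +1M excludes.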

Find from a list of files

If you have a list of files (in a .txt file, for example) that you need to search for, you can search for your list of files with a combination of the find and grep commands. For this command to work, just make sure that each pattern you want to search for is separated by a new line.

find /path/to/search | grep -f filelist.txt

The -f option on grep means “file” and allows us to specify a file of strings to be matched with. This results in the find command returning any file or directory names that match those in the list.
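Here is a disposable end-to-end sketch of that pipeline (all names are hypothetical): build a small tree, write two patterns to a list file, and keep only the matching paths.

```shell
# A scratch tree and a pattern list, one pattern per line.
dir=$(mktemp -d)
touch "$dir/alpha.log" "$dir/beta.log" "$dir/gamma.txt"
list=$(mktemp)
printf 'alpha\ngamma\n' > "$list"

# grep -f keeps only the paths that match some line in the list.
matches=$(find "$dir" -type f | grep -f "$list")
```

beta.log is dropped because no pattern in the list matches it.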

Find not in a list

Using that same list of files we mentioned in the previous example, you can also use find to search for any files that do not fit the patterns inside the text file. Once again, we’ll use a combination of the find and grep command; we just need an additional option specified with grep:

find /path/to/search | grep -vf filelist.txt

The -v option on grep means “inverse match” and will return a list of files that don’t match any of the patterns specified in our list of files.

Set the maxdepth

The find command will search recursively by default. This means that it will search the specified directory for the pattern you specified, as well as any and all subdirectories within the directory you told it to search.

For example, if you tell find to search the root directory of Linux (/), it will search the entire hard drive, no matter how many subdirectories of subdirectories exist. You can circumvent this behavior with the -maxdepth option.

Specify a number after -maxdepth to instruct find on how many subdirectories it should recursively search.

Search for files only in the current directory, without descending into subdirectories (note that -maxdepth 1 includes the starting directory's own entries; -maxdepth 0 would test only the starting point itself and never find the file):

find . -maxdepth 1 -name "myfile.txt"

Search the current directory and one level of subdirectories:

find . -maxdepth 2 -name "myfile.txt"
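A small local sketch (invented filenames) makes the depth levels concrete: one file at the top of a scratch directory and one a level deeper.

```shell
# One file at the top level, one nested one level deeper.
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/top.txt" "$dir/sub/nested.txt"

# -maxdepth 1 stops at the starting directory's own entries;
# -maxdepth 2 also descends into its immediate subdirectories.
shallow=$(find "$dir" -maxdepth 1 -name "*.txt")
deep=$(find "$dir" -maxdepth 2 -name "*.txt")
```

The shallow search returns only top.txt, while the deeper one returns both files.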

Find empty files (zero-length)

To search for empty files with find, you can use the -empty flag. Search for all empty files:

find /path/to/search -type f -empty

Search for all empty directories:

find /path/to/search -type d -empty

It is also very handy to couple this command with the -delete option if you’d like to automatically delete the empty files or directories that are returned by find.

Delete all empty files in a directory (and subdirectories):

find /path/to/search -type f -empty -delete
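Since -delete is irreversible, it’s worth convincing yourself what it matches first. This throwaway sketch (hypothetical filenames) shows that only the zero-length file is removed:

```shell
dir=$(mktemp -d)
touch "$dir/empty.txt"           # zero bytes
echo data > "$dir/full.txt"      # non-empty

# -delete removes only the files matched by the preceding tests.
find "$dir" -type f -empty -delete
```

A good habit is to run the command once without -delete to review the list before deleting anything.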

Find largest directory or file

If you would like to quickly determine what files or directories on your system are taking up the most room, you can use find to search recursively and output a sorted list of files and/or directories by their size.

How to show the biggest file in a directory:

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | tail -1

Notice that the output of the find command was piped to two other handy Linux utilities: sort and tail. Sort puts the list of files in order by their size, and tail outputs only the last file in the list, which is also the largest.

You can adjust the tail command if you’d like to output, for example, the top 5 largest files:

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | tail -5

Alternatively, you could use the head command to determine the smallest file(s):

find /path/to/search -type f -printf "%s\t%p\n" | sort -n | head -5
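To see the pipeline in action locally, this sketch (made-up filenames, GNU find's -printf assumed) creates two files of known sizes and extracts the path of the larger one:

```shell
dir=$(mktemp -d)
truncate -s 10K "$dir/small.bin"
truncate -s 50K "$dir/large.bin"

# %s prints the size in bytes, %p the path; sort numerically,
# keep the last line, and cut off the size column.
largest=$(find "$dir" -type f -printf "%s\t%p\n" | sort -n | tail -1 | cut -f2)
```

The tab separator written by -printf is what lets cut split the size from the path.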

If you’d like to search for directories instead of files, you can specify “d” in the type option, but be aware that %s for a directory reports the size of the directory entry itself, not the total size of its contents. To find the directory taking up the most space, du is better suited:

du /path/to/search | sort -n | tail -1

Find setuid set files

Setuid is an abbreviation for “set user ID on execution” which is a file permission that allows a normal user to run a program with escalated privileges (such as root).

This can be a security concern for obvious reasons, but these files can be easy to isolate with the find command and a few options.

The find command has two options to help us search for files with certain permissions: -user and -perm. To find files that are able to be executed with root privileges by a normal user, you can use this command:

find /path/to/search -user root -perm /4000

Find suid files

In the screenshot above, we included the -exec option in order to show a little more output about the files that find returns with. The whole command looks like this:

find /path/to/search -user root -perm /4000 -exec ls -l {} \;

You could also substitute “root” in this command for any other user that you want to search for as the owner. Or, you could search for all files with SUID permissions and not specify a user at all:

find /path/to/search -perm /4000

Find sgid set files

Finding files with SGID set is almost the same as finding files with SUID, except the permissions for 4000 need to be changed to 2000:

find /path/to/search -perm /2000

You can also search for files that have either SUID or SGID set by specifying 6000 with the slash syntax (to require both bits at once, use -perm -6000 instead):

find /path/to/search -perm /6000
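You can exercise the permission matching safely on your own files (a local sketch with invented names; setting the SUID bit on a file you own requires no special privileges):

```shell
dir=$(mktemp -d)
touch "$dir/normal" "$dir/suid"
chmod 0755 "$dir/normal"
chmod 4755 "$dir/suid"     # set the SUID bit on one file

# /4000 matches any file with the SUID bit set, regardless of owner.
found=$(find "$dir" -type f -perm /4000)
```

Only the file with mode 4755 is returned; the plain 0755 file is ignored.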

List files without permission denied

When searching for files with the find command, you must have read permissions on the directories and subdirectories that you’re searching through. If you don’t, find will output an error message but continue to look throughout the directories that you do have permission on.

Permission denied

Although this could happen in a lot of different directories, it will almost certainly happen when searching from the root directory as a non-root user.

That means that when you’re trying to search your whole hard drive for a file, the find command is going to produce a ton of error messages.

To avoid seeing these errors, you can redirect the stderr output of find to stdout and pipe that to grep.

find / -name "myfile.txt" 2>&1 | grep -v "Permission denied"

This command uses the -v (inverse) option of grep to show all output except for the lines that say “Permission denied.”

Find modified files within the last X days

Use the -mtime option on the find command to search for files or directories that were modified within the last X days. It can also be used to search for files older than X days, or files that were modified exactly X days ago.

Here are some examples of how to use the -mtime option on the find command:

Search for all files that were modified within the last 30 days:

find /path/to/search -type f -mtime -30

Search for all files that were modified more than 30 days ago:

find /path/to/search -type f -mtime +30

Search for all files that were modified exactly 30 days ago:

find /path/to/search -type f -mtime 30

If you want the find command to output more information about the files it finds, such as the modified date, you can use the -exec option and include an ls command:

find /path/to/search -type f -mtime -30 -exec ls -l {} \;
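A quick local sketch (hypothetical filenames; GNU touch’s -d option is assumed, since it can backdate a file’s modification time) shows -mtime splitting recent files from old ones:

```shell
dir=$(mktemp -d)
touch "$dir/recent.txt"                   # modified just now
touch -d "40 days ago" "$dir/old.txt"     # backdated mtime

recent=$(find "$dir" -type f -mtime -30)  # modified within the last 30 days
old=$(find "$dir" -type f -mtime +30)     # modified more than 30 days ago
```

Each search returns exactly one of the two files, on opposite sides of the 30-day boundary.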

Sort by time

To sort through the results of find by modified time of the files, you can use the -printf option to list the times in a sortable way, and pipe that output to the sort utility.

find /path/to/search -printf "%T+\t%p\n" | sort

This command will sort the files older to newer. If you’d like the newer files to appear first, just pass the -r (reverse) option to sort.

find /path/to/search -printf "%T+\t%p\n" | sort -r
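The reason plain sort works here is that %T+ emits a timestamp like 2020-06-01+12:00:00, which sorts correctly as text. A disposable sketch (made-up filenames, GNU find and touch assumed):

```shell
dir=$(mktemp -d)
touch -d "2020-01-01" "$dir/older.txt"
touch -d "2020-06-01" "$dir/newer.txt"

# %T+ prints a lexically sortable timestamp; sort -r puts newest first,
# and cut strips the timestamp column to leave just the path.
newest=$(find "$dir" -type f -printf "%T+\t%p\n" | sort -r | head -1 | cut -f2)
```

With -r, the most recently modified file surfaces at the top of the list.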

Difference between locate and find

The locate command on Linux is another good way to search for files on your system. It’s not packed with a plethora of search options like the find command is, so it’s a bit less flexible, but it still comes in handy.

locate myfile.txt

The locate command works by searching a database that contains the names of all the files on the system. This database is refreshed by the updatedb command.

Since the locate command doesn’t have to perform a live search of all the files on the system, it’s much more efficient than the find command. But in addition to the lack of options, there’s another drawback: the database of files is typically refreshed only once per day (via a scheduled cron job).

You can update this database of files manually by running the updatedb command:

updatedb

The locate command is particularly useful when you need to search the entire hard drive for a file, since the find command will naturally take a lot longer, as it has to traverse every single directory in real-time.

If searching a specific directory, known to not contain a large number of subdirectories, it’s better to stick with the find command.

CPU load of find command

When searching through loads of directories, the find command can be resource-intensive. It should inherently allow more important system processes to have priority, but if you need to ensure that the find command takes up fewer resources on a production server, you can use the ionice or nice command.

Monitor CPU usage of the find command:

top

Reduce the Input/Output priority of find command:

ionice -c3 -n7 find /path/to/search -name "myfile.txt"

Reduce the CPU priority of find command:

nice -n 19 find /path/to/search -name "myfile.txt"

Or combine both utilities to really ensure low I/O and low CPU priority:

nice -n 19 ionice -c2 -n7 find /path/to/search -name "myfile.txt"

I hope you find the tutorial useful. Keep coming back.

December 29, 2019

Raspberry Pi and sSMTP

Hello.

Some of us have started using a Raspberry Pi at home or at work.
Or perhaps you own a VPS (virtual private server), possibly hosted within your company.
Whether your server is virtual or physical, you should keep an eye on your system.
Of course, you can't stay connected to your server around the clock.

Your server should be able to email you in emergencies and on events such as cron jobs and system logins; otherwise you won't know about anything that happens on it.

For that you first need a mail account; Gmail or Yandex will do.
I recommend Yandex. It has been sending the Getgnu.org server's emails without a problem for 5 years 🙂

Of course, you need to install a few packages before your server can send email.

The ssmtp package is required to send through Yandex.
ssmtp will let us send email through an existing SMTP account.

Naturally, mailx is required to compose mail.
Let's check whether mailx is in our package repository.

root@rihanna ~ # apt-cache search mailx
heirloom-mailx - feature-rich BSD mail(1)
mailutils - GNU mailutils utilities for handling mail
root@rihanna ~ #

WARNING!
If an MTA (postfix, qmail, exim, sendmail) is already installed on your server, do not install ssmtp!
Installing ssmtp can damage the existing MTA. No responsibility is accepted.


root@rihanna ~ # apt-get install -y heirloom-mailx ssmtp

After the installation completes:

root@rihanna:~# nano /etc/ssmtp/ssmtp.conf

Edit this content to match your setup and paste it in:

# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
# The place where the mail goes. The actual machine name is required no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
# Where will the mail seem to come from?
rewriteDomain=rihanna.FalancaFilanca.org.tr

# The full hostname
hostname=rihanna.FalancaFilanca.org.tr

# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
#FromLineOverride=YES

mailhub=smtp.yandex.com:587
AuthUser=FalancaFilanca@yandex.com
AuthPass=Parola_Parolam
UseSTARTTLS=YES

That's all for the sSMTP settings and the mailx installation.

Now let's send a test email.


mail -s "Merhaba" kime@FalanFilan.org.tr

Selamlar.

.

That's how sending mail works: after you finish writing, move to a new line and end the message with a single "." on its own line.

That's all.

I look forward to your comments and questions.

Mutt + Gmail

RaspBerry Pi – mailx


15+ examples for Linux cURL command

In this tutorial, we will cover the cURL command in Linux. Follow along as we guide you through the functions of this powerful utility with examples to help you understand everything it’s capable of. The cURL command is used to download or upload data to a server, using one of its 20+ supported protocols. This data could be a file, email message, or web page. cURL is an ideal tool for interacting with a website or API, sending requests and displaying the responses to the terminal or logging the data to a file. Sometimes it’s used as part of a larger script, handing off the retrieved data to other functions for processing. Since cURL can be used to retrieve files from servers, it’s often used to download part of a website. It performs this function well, but sometimes the wget command is better suited for that job. We’ll go over some of the differences and similarities between wget and cURL later in this article. We’ll show you how to get started using cURL in the sections below.

Download a file

The most basic command we can give to cURL is to download a website or file. cURL will use HTTP as its default protocol unless we specify a different one. To download a website, just issue this command:

curl http://www.google.com

Of course, enter any website or page that you want to retrieve.

curl basic command

Doing a basic command like this with no extra options will rarely be useful, because this only tells cURL to retrieve the source code of the page you’ve provided.

curl output

When we ran our command, our terminal is filled with HTML and other web scripting code – not something that is particularly useful to us in this form.

Let’s download the website as an HTML document instead, so that the content can be displayed. Add the --output option to cURL to achieve this.

curl www.likegeeks.com --output likegeeks.html

curl output switch

Now the website we downloaded can be opened and displayed in a web browser.

downloaded website

If you’d like to download an online file, the command is about the same. But make sure to append the --output option to cURL as we did in the example above.

If you fail to do so, cURL will send the binary output of the online file to your terminal, which will likely cause it to malfunction.

Here’s what it looks like when we initiate the download of a 500KB word document.

curl download document

The word document begins to download and the current progress of the download is shown in the terminal. When the download completes, the file will be available in the directory we saved it to.

In this example, no directory was specified, so it was saved to our present working directory (the directory from which we ran the cURL command).

Also, did you notice the -L option that we specified in our cURL command? It was necessary in order to download this file, and we go over its function in the next section.

Follow redirect

If you get an empty output when trying to cURL a website, it probably means that the website told cURL to redirect to a different URL. By default, cURL won’t follow the redirect, but you can tell it to with the -L switch.

curl -L www.likegeeks.com

curl follow redirect

In our research for this article, we found it was necessary to specify the -L on a majority of websites, so be sure to remember this little trick. You may even want to append it to the majority of your cURL commands by default.

Stop and resume download

If your download gets interrupted, or if you need to download a big file but don’t want to do it all in one session, cURL provides an option to stop and resume the transfer.

To stop a transfer manually, you can just end the cURL process the same way you’d stop almost any process currently running in your terminal, with a ctrl+c combination.

curl stop download

Our download has begun, but was interrupted with ctrl+c, now let’s resume it with the following syntax:

curl -C - example.com/some-file.zip --output MyFile.zip

The -C switch is what resumes our file transfer, but also notice that there is a dash (-) directly after it. This tells cURL to resume the file transfer, but to first look at the already downloaded portion in order to see the last byte downloaded and determine where to resume.

resume file download

Our file transfer was resumed and then proceeded to finish downloading successfully.

Specify timeout

If you want cURL to abandon what it’s doing after a certain amount of time, you can specify a timeout in the command. This is especially useful because some operations in cURL don’t have a timeout by default, so one needs to be specified if you don’t want it getting hung up indefinitely.

You can specify a maximum time to spend executing a command with the -m switch. When the specified time has elapsed, cURL will exit whatever it’s doing, even if it’s in the middle of downloading or uploading a file.

cURL expects your maximum time to be specified in seconds. So, to timeout after one minute, the command would look like this:

curl -m 60 example.com

Another type of timeout that you can specify with cURL is the amount of time to spend connecting. This helps make sure that cURL doesn’t spend an unreasonable amount of time attempting to contact a host that is offline or otherwise unreachable.

It, too, accepts seconds as an argument. The option is written as --connect-timeout.

curl --connect-timeout 60 example.com

Using a username and a password

You can specify a username and password in a cURL command with the -u switch. For example, if you wanted to authenticate with an FTP server, the syntax would look like this:

curl -u username:password ftp://example.com

curl authenticate

You can use this with any protocol, but FTP is frequently used for simple file transfers like this.

If we wanted to download the file displayed in the screenshot above, we just issue the same command but use the full path to the file.

curl -u username:password ftp://example.com/readme.txt

curl authenticate download

Use proxies

It’s easy to direct cURL to use a proxy before connecting to a host. cURL will expect an HTTP proxy by default, unless you specify otherwise.

Use the -x switch to define a proxy. Since no protocol is specified in this example, cURL will assume it’s an HTTP proxy.

curl -x 192.168.1.1:8080 http://example.com

This command would use 192.168.1.1 on port 8080 as a proxy to connect to example.com.

You can use it with other protocols as well. Here’s an example of what it’d look like to use an HTTP proxy to cURL to an FTP server and retrieve a file.

curl -x 192.168.1.1:8080 ftp://example.com/readme.txt

cURL supports many other types of proxies and options to use with those proxies, but expanding further would be beyond the scope of this guide. Check out the cURL man page for more information about proxy tunneling, SOCKS proxies, authentication, etc.

Chunked download large files

We’ve already shown how you can stop and resume file transfers, but what if we wanted cURL to only download a chunk of a file? That way, we could download a large file in multiple chunks.

It’s possible to download only certain portions of a file, in case you need to stay under a download cap or something like that. The --range flag is used to accomplish this.

curl range man

Sizes must be written in bytes. So if we wanted to download the latest Ubuntu .iso file in 100 MB chunks, our first command would look like this:

curl --range 0-99999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso --output ubuntu-part1

The second command would need to pick up at the next byte and download another 100 MB chunk.

curl --range 100000000-199999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso --output ubuntu-part2

Repeat this process until all the chunks are downloaded. The last step is to combine the chunks into a single file, which can be done with the cat command.

cat ubuntu-part? > ubuntu-18.04.3-desktop-amd64.iso
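You can convince yourself the reassembly is lossless without any network traffic. This local sketch splits a random file at a byte boundary, standing in for two --range chunks, then verifies that cat restores it byte for byte:

```shell
# 300 KB of random data standing in for the remote file.
src=$(mktemp)
head -c 300000 /dev/urandom > "$src"

# "Download" two chunks: bytes 0-99999 and 100000 to the end.
head -c 100000 "$src" > "$src-part1"
tail -c +100001 "$src" > "$src-part2"

# Reassemble and compare with the original.
cat "$src-part1" "$src-part2" > "$src-joined"
cmp -s "$src" "$src-joined" && echo "chunks reassembled correctly"
```

The key detail is that the second chunk's range starts exactly one byte after the first one ends, so no bytes are duplicated or lost.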

Client certificate

To access a server using certificate authentication instead of basic authentication, you can specify a certificate file with the --cert option.

curl --cert path/to/cert.crt:password ftp://example.com

cURL has a lot of options for the format of certificate files.

curl cert

There are more certificate-related options, too: --cacert, --cert-status, --cert-type, etc. Check out the man page for a full list of options.

Silent cURL

If you’d like to suppress cURL’s progress meter and error messages, the -s switch provides that feature. It will still output the data you request, so if you’d like the command to be 100% silent, you’d need to direct the output to a file.

Combine this command with the -O flag to save the file in your present working directory. This will ensure that cURL returns with 0 output.

curl -s -O http://example.com

Alternatively, you could use the –output option to choose where to save the file and specify a name.

curl -s http://example.com --output index.html

curl silent

Get headers

Grabbing the headers of a remote address is very simple with cURL; you just need to use the -I option.

curl -I example.com

curl headers

If you combine this with the -L option, cURL will return the headers of every address that it’s redirected to.

curl -I -L example.com

Multiple headers

You can pass headers to cURL with the -H option. And to pass multiple headers, you just need to use the -H option multiple times. Here’s an example:

curl -H 'Connection: keep-alive' -H 'Accept-Charset: utf-8' http://example.com

Post (upload) file

POST is a common way for websites to accept data. For example, when you fill out a form online, there’s a good chance that the data is being sent from your browser using the POST method. To send data to a website in this way, use the -d option.

curl -d 'name=geek&location=usa' http://example.com

To upload a file, rather than text, the syntax would look like this:

curl -d @filename http://example.com

Use as many -d flags as you need in order to specify all the different data or filenames that you are trying to upload.

You can use the -T option if you want to upload a file to an FTP server.

curl -T myfile.txt ftp://example.com/some/directory/

Send an email

Sending an email is simply uploading data from your computer (or another device) to an email server. Since cURL is able to upload data, we can use it to send emails. There are a slew of options, but here’s an example of how to send an email through an SMTP server:

curl smtp://mail.example.com --mail-from me@example.com --mail-rcpt john@domain.com --upload-file email.txt

Your email file would need to be formatted correctly. Something like this:

cat email.txt

From: Web Administrator <me@example.com>

To: John Doe <john@domain.com>

Subject: An example email

Date: Sat, 7 Dec 2019 02:10:15

John,

Hope you have a great weekend.

-Admin

As usual, more granular and specialized options can be found in the man page of cURL.

Read email message

cURL supports IMAP (and IMAPS) and POP3, both of which can be used to retrieve email messages from a mail server.

Login using IMAP like this:

curl -u username:password imap://mail.example.com

This command will list available mailboxes, but not view any specific message. To do this, specify the UID of the message with the -X option.

curl -u username:password imap://mail.example.com -X 'UID FETCH 1234'

Difference between cURL and wget

Sometimes people confuse cURL and wget because they’re both capable of retrieving data from a server. But this is the only thing they have in common.

We’ve shown in this article what cURL is capable of. wget provides a different set of functions. wget is the best tool for downloading websites and is capable of recursively traversing directories and links to download entire sites.

For downloading websites, use wget. If using some protocol other than HTTP or HTTPS, or for uploading files, use cURL. cURL is also a good option for downloading individual files from the web, although wget does that fine, too.

I hope you find the tutorial useful. Keep coming back.

Important Facts Everyone Needs to Know About Blockchain technology

If you were to ask the general population what they know about blockchain technology, you wouldn’t be surprised to hear that most of them either know nothing at all or can connect the blockchain to cryptocurrencies. They wouldn’t be wrong. Cryptocurrency is, in fact, dependent upon blockchain technology and it is the technology that has paved the way for bitcoin to become possible. Without it, the world’s most famous and valuable crypto wouldn’t exist. This is because when someone makes a payment with bitcoin, the payment is authenticated as another block of information on the chain. The blockchain takes the place of a bank to keep a record of payments, but unlike a bank, there is no central authority. The decentralised nature of bitcoin, therefore, hinges on this blockchain acting as a public ledger available to all but completely secure.

Looking Further than Bitcoin and Crypto

Yet, this is not the only use for blockchain technology. Despite bitcoin relying on its blockchain, it doesn’t work in the other direction.

Blockchains are used for other purposes in other industries. Here are some examples of industries that have already tapped into the blockchain potential:

#1: The Music Industry

One issue constantly facing musicians and those involved with creating music is that they do not receive the money they are owed.

It is not unheard of that megastars are seeking compensation from other music organisations for not paying them the royalties they deserve. Copyright infringements are rife and the court cases to address these problems are just as common.

The blockchain can counter this issue by providing a traceable and publicly available set of information for each song and who is owed what royalties from it.

The same idea can be applied to other forms of art such as photography. Photographers can trace the use of their images on the blockchain and even allow experts to track the origin of a piece of art.

#2: The Automotive Industry

One issue when buying a car is that you can never be certain that what you are buying is exactly how it was advertised or sold to you.

People can tamper with the mileage on a car and get around telling you about its maintenance history. What you think is a vehicle with an excellent track record could have been used a lot more and have been in the garage frequently.

This is why some businesses in the automotive industry have adopted blockchains and are using them on some vehicles to record maintenance and mileage. This is to prevent odometer fraud and vehicles being inaccurately sold by criminals.

#3: The Sports Industry

Some sports teams are using blockchain to create their own tokens for fans to use to buy match tickets and merchandise.

This is a way of creating a currency that is valuable to a select community. The blockchain is also being used by teams to implement fair voting systems on matters such as player jerseys and the like.

Using blockchains to cast votes is also a topic being considered by governments to ensure secure election processes without the need for recounts.

#4: The Freight Industry

The freight industry is welcoming blockchains to streamline often complex processes and reduce the amount of paperwork required en route.

It would enable businesses to track packages on the way to their destination as they are scanned by different workers. It was also rumored to be a solution to the backstop issue within the Brexit negotiations.

From these four examples, it is easy to see that blockchains have more purposes than the one we most associate them with. In fact, it could be argued that the hype of owning a Luno Bitcoin wallet and sending secure payments around the world faster and cheaper may be making the general population blind to the other possibilities at hand.

The truth is, understanding the facts around blockchains will help us look beyond cryptocurrencies. Here are some of the key facts you may not know about blockchains already.

Blockchain Also Has a Place in Science

Thanks to grants and our natural thirst for knowledge, the scientific community has been able to amass a wealth of studies that help improve policies and inform public services.

However, scientists often get stuck when they try to replicate studies to authenticate results further, or tweak studies to find out more (and further our knowledge).

This happens because the original study’s data is not publicly available or easy to access. The blockchain could help in this matter by being the place where data is stored for scientific study.

Researchers across the globe could access a public ledger of data to conduct studies that other research has been based on, allowing future results to accurately verify information or increase our understanding.

Consider how many times two different researching teams have conflicting views about the same subject. The conflict may arise due to a difference in the quality or amount of data.

Blockchain technology holding the same data set would allow all research parties to research from the same information. Although this would help scientific groups to collaborate and progress with findings, it does also call for high-quality data to be used.

Blockchain as the Answer to ID Verification

Verifying our identity has become part and parcel of modern life. It is not just airports where we have to dig out our passport, but also gyms, libraries and any other time we sign up for a membership or service.

This can be time-consuming and inconvenient, especially when each vendor wants a different type of ID or a different combination of documentation.

Although blockchain has yet to be used in this way, there is potential for it to be a solution, giving every citizen of a country – or a group of countries that opt into the strategy – the ability to record personal information and their identity on the blockchain.

This would make ID verification seamless in certain locations.

EU citizens already have something similar to this with their information stored on a chip placed on their ID card. An upgraded version of this on the blockchain could be the answer, with healthcare professionals having access to this in the event of an emergency.

Soon You May Be Buying Blockchain-Based Products

The idea that blockchain technology will only be used by businesses is not true. Yes, many businesses will adopt the technology, but it will also be placed in the hands of the consumer.

This is because products are also going to be made with blockchain technology powering them – and it is already happening.

Some smartphone developers have already made blockchain smartphones. Other products in line to be developed include devices around the home that revolutionise the way we live.

What we are referring to is devices classed within the Internet of Things (IoT).

These devices will be connected and change how we do tasks and chores at home, such as telling a small device in the corner that you want to watch Netflix, or turning the dishwasher on.

The problem when lots of devices are connected is that they make you more vulnerable to hackers.

Blockchains can prevent hacks and protect your data by securing your at-home network of IoT devices. Methods of combining the two are already being worked on to keep consumers safe and their data protected.

Other Facts You Should Know About Blockchain Technology

The potential for blockchain has now been well established, but what has it already achieved? Here are some shorter facts about the technology that not a lot of people realize:

  1. Satoshi Nakamoto, the inventor of bitcoin and the person (or persons) who made blockchains famous, remains unknown. People have suggested certain individuals as the figure behind the revolution, but the actual identity of the person responsible remains a mystery.
  2. Blockchains do not have to be public. They can also be private, somewhat like an intranet within a business. This is what enables them to function as a source of ID without compromising on data privacy laws.
  3. It is estimated that blockchain development is at the stage the internet was at around two decades ago. Considering this and what it has achieved so far perfectly illustrates the potential blockchain technology encompasses.
  4. Blockchains are relatively untouched. Around half of the world’s population use the internet and around 0.05% of us are using blockchains. This number will rise when more businesses adopt the technology.
  5. Conventional banks are now seeking blockchains to help with their own processes. What was once a tool against fiat financial systems is now being used within them. This may make some crypto enthusiasts weep a little.
  6. A Blockchain is at its most secure stage when it is first created. Many people assume that the blockchain will become more secure in time, but this is not the case.

It Doesn’t Mean We Should Forget about Cryptos

Just because the success of blockchain technology is not tied to cryptocurrency doesn’t mean we should forget about it. Cryptocurrencies, as well as digital tokens, ICOs, and smart contracts, are the biggest successes of blockchain to date.

The benefits of cryptocurrency are huge, with faster, cheaper and more convenient payments becoming available worldwide.

This has a significantly positive impact on unbanked populations who do not have access to a bank account. People sending money home to underdeveloped countries can send more of it without incurring fees or time delays.

These glimpses into cryptocurrency’s power to disrupt the financial status quo should not be forgotten as other developments occur.

What Will Blockchains Do to the Job Market?

Technology, and the internet in particular, has had a significant impact on the job market in the developed world. Many jobs were replaced with machines that could do the work just as efficiently, and many of those jobs had been held by the working classes.

The same could happen once blockchain technology reaches its golden period. Many jobs may be displaced due to businesses utilising blockchains.

For example, earlier it was discussed that freight companies may use blockchains to streamline shipping processes. There is a strong chance that this development could put some workers out of a job.

Jobs may be lost due to blockchain, and the losses may be concentrated in manual professions. However, the blockchain may also create lots of new jobs that do not exist today. Most of these jobs will be aimed at tech-savvy types and us geeks.

So, Should You Invest in Blockchain Startups?

There are so many positive noises coming from industries and businesses that are using blockchains. Yet, it is crucial to realise that this trend is new.

No doubt there are investment opportunities to be secured with blockchain B2B businesses, but are blockchain startups the right investment?

The answer may be yes, but it may be smarter to invest your money in established technology companies who already own a strong market share.

Blockchains can be made for everyone and choosing a small startup may not guarantee you success. Placing your investment with companies who are actively looking at blockchains and already have a foothold in their market could be the wiser move.

The Takeaway Fact to Remember

There is a chance that you learned a lot about blockchains in this post, but you are not likely to retain everything you learned. If you need one fact about blockchain technology to leave with – the most important one – it is that blockchains cannot be ignored.

They are a key player in the fourth industrial revolution and in that sense, they are exceptionally disruptive to all current technology.

Consider blockchains to be the puppet masters of the future of the tech and many other industries. It may just take a little while for the curtain to be pulled back completely.

SSH port forwarding (tunneling) in Linux

In this tutorial, we will cover SSH port forwarding in Linux. This is a function of the SSH utility that Linux administrators use to create encrypted and secure relays across different systems. SSH port forwarding, also called SSH tunneling, is used to create a secure connection between two or more systems. Applications can then use these tunnels to transmit data. Your data is only as secure as its encryption, which is why SSH port forwarding is a popular mechanism to use. Read on to find out more and see how to set up SSH port forwarding on your own systems.

What is SSH port forwarding?

To put it simply, SSH port forwarding involves establishing an SSH tunnel between two or more systems and then configuring the systems to transmit a specified type of traffic through that connection.

There are a few different things you can do with this: local forwarding, remote forwarding, and dynamic port forwarding. Each configuration requires its own steps to set up, so we will go over each of them later in the tutorial.

Local port forwarding is used to make an external resource available on the local network. An SSH tunnel is established to a remote system, and traffic from the local network can use that tunnel to transmit data back and forth, accessing the remote system and network as if it was a part of the local network.

Remote port forwarding is the exact opposite. An SSH tunnel is established but the remote system is able to access your local network.

Dynamic port forwarding sets up a SOCKS proxy server. You can configure applications to connect to the proxy and transmit all data through it. The most common use for this is for private web browsing or to make your connection seemingly originate from a different country or location.

SSH port forwarding can also be used to set up a virtual private network (VPN). You’ll need an extra program for this called sshuttle. We cover the details later in the tutorial.

Why use SSH port forwarding?

Since SSH creates encrypted connections, this is an ideal solution if you have applications that transmit data in plaintext or use an unencrypted protocol. This holds especially true for legacy applications.

It’s also popular to use it for connecting to a local network from the outside. For example, an employee using SSH tunnels to connect to a company’s intranet.

You may be thinking this sounds like a VPN. The two are similar, but creating ssh tunnels is for specific traffic, whereas VPNs are more for establishing general connections.

SSH port forwarding will allow you to access remote resources by just establishing an SSH tunnel. The only requirement is that you have SSH access to the remote system and, ideally, public key authentication configured for password-less SSHing.

How many sessions are possible?

Technically, you can specify as many port forwarding sessions as you’d like. Networks use 65,535 different ports, and you are able to forward any of them that you want.

When forwarding traffic, be cognizant of the services that use certain ports. For example, port 80 is reserved for HTTP. So you would only want to forward traffic on port 80 if you intend to forward web requests.

The port you forward on your local system doesn’t have to match that of the remote server. For example, you can forward port 8080 on localhost to port 80 on the remote host.

If you don’t care what port you are using on the local system, select one between 2,000 and 10,000 since these are rarely used ports. Smaller numbers are typically reserved for certain protocols.
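One way to confirm that a candidate local port is actually free before forwarding to it is to probe it. Here is a sketch using bash’s /dev/tcp pseudo-device; the 2000–10000 range and the loopback address mirror the advice above, but the helper name and approach are our own, not part of ssh:

```shell
#!/usr/bin/env bash
# Sketch: find the first free local port in the 2000-10000 range by
# probing each candidate with bash's /dev/tcp pseudo-device.
find_free_port() {
  local port
  for port in $(seq 2000 10000); do
    # The connect attempt fails instantly on localhost when nothing listens,
    # so a failure means the port is free for our forward.
    if ! (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      echo "$port"
      return 0
    fi
  done
  return 1  # no free port found (very unlikely)
}

find_free_port   # prints the first free port, ready to use with ssh -L
```

The printed number can then be used directly as the local_port in an ssh -L command.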

Local forwarding

Local forwarding involves forwarding a port from the client system to a server. It allows you to configure a port on your system so that all connections to that port will get forwarded through the SSH tunnel.

Use the -L switch in your ssh command to specify local port forwarding. The general syntax of the command is like this:

ssh -L local_port:remote_ip:remote_port user@hostname.com

Check out the example below:

ssh -L 80:example1.com:80 example2.com

local port forwarding

This command forwards port 80 on the local system to port 80 on example1.com, with the tunnel running through the SSH server at example2.com. Any connection made to port 80 on the local machine will, in the background, be relayed through example2.com and on to example1.com. Note that binding a local port below 1024, such as 80, requires root privileges; an unprivileged alternative is to use a higher local port like 8080.

Such a command is useful when configuring external access to a company intranet or other private network resources.

Test SSH port forwarding

To see if your port forwarding is working correctly, you can use the netcat command. On the client machine (the system where you ran the ssh -L command), type the netcat command with this syntax:

nc -v remote_ip port_number

Test port forwarding using netcat

If the port is forwarded and data is able to traverse the connection successfully, netcat will return with a success message. If it doesn’t work, the connection will time out.

If you’re having trouble getting the port forwarding to work, make sure you’re able to ssh into the remote server normally and that you have configured the ports correctly. Also, verify that the connection isn’t being blocked by a firewall.

Persistent SSH tunnels (Using Autossh)

Autossh is a tool that can be used to create persistent SSH tunnels. The only prerequisite is that you need to have public key authentication configured between your systems, unless you want to be prompted for a password every time the connection dies and is reestablished.

Autossh may not be installed by default on your system, but you can quickly install it using apt, yum, or whatever package manager your distribution uses.

sudo apt-get install autossh

The autossh command is going to look pretty much identical to the ssh command we ran earlier.

autossh -L 80:example1.com:80 example2.com

Persistent SSH port forwarding autossh

Autossh will make sure that tunnels are automatically re-established in case they close because of inactivity, remote machine rebooting, network connection being lost, etc.

Remote forwarding

Remote port forwarding is used to give a remote machine access to your system. For example, if you want a service on your local computer to be accessible to systems on your company’s private network, you could configure remote port forwarding to accomplish that.

To set this up, issue an ssh command with the following syntax:

ssh -R remote_port:local_ip:local_port user@hostname.com

If you have a local web server on your computer and would like to grant access to it from a remote network, you could forward port 8080 (common http alternative port) on the remote system to port 80 (http port) on your local system.

ssh -R 8080:localhost:80 geek@likegeeks.com

Remote port forwarding

Dynamic forwarding

SSH dynamic port forwarding will make SSH act as a SOCKS proxy server. Rather than forwarding traffic on a specific port (the way local and remote port forwarding do), this will forward traffic across a range of ports.

If you have ever used a proxy server to visit a blocked website or view location-restricted content (like viewing stuff on Netflix that isn’t available in your country), you probably used a SOCKS server.

It also provides privacy, since you can route your traffic through a SOCKS server with dynamic port forwarding and prevent anyone from snooping log files to see your network traffic (websites visited, etc).

To set up dynamic port forwarding, use the ssh command with the following syntax:

ssh -D local_port user@hostname.com

So, if we wanted to forward traffic on port 1234 to our SSH server:

ssh -D 1234 geek@likegeeks.com

Once you’ve established this connection, you can configure applications to route traffic through it. For example, on your web browser:

Socks proxy

Type the loopback address (127.0.0.1) and the port you configured for dynamic port forwarding, and all traffic will be forwarded through the SSH tunnel to the remote host (in our example, the likegeeks.com SSH server).

Multiple forwarding

For local port forwarding, if you’d like to set up more than one port to be forwarded to a remote host, you just need to specify each rule with its own -L switch. The command syntax is like this:

ssh -L local_port_1:remote_ip:remote_port_1 -L local_port_2:remote_ip:remote_port_2 user@hostname.com

For example, if you want to forward ports 8080 and 4430 to 192.168.1.1 ports 80 and 443 (HTTP and HTTPS), respectively, you would use this command:

ssh -L 8080:192.168.1.1:80 -L 4430:192.168.1.1:443 user@hostname.com

For remote port forwarding, you can set up more than one port to be forwarded by specifying each new rule with its own -R switch. The command syntax is like this:

ssh -R remote_port_1:local_ip:local_port_1 -R remote_port_2:local_ip:local_port_2 user@hostname.com

List port forwarding

You can see what SSH tunnels are currently established with the lsof command.

lsof -i | egrep '\<ssh\>'

SSH tunnels

In this screenshot, you can see that there are 3 SSH tunnels established. Add the -n flag to have IP addresses listed instead of resolving the hostnames.

lsof -i -n | egrep '\<ssh\>'

SSH tunnels n flag

Limit forwarding

By default, SSH port forwarding is pretty open. You can freely create local, remote, and dynamic port forwards as you please.

But if you don’t trust some of the SSH users on your system, or you’d just like to enhance security in general, you can put some limitations on SSH port forwarding.

There are a couple of different settings you can configure inside the sshd_config file to put limitations on port forwarding. To configure this file, edit it with vi, nano, or your favorite text editor:

sudo vi /etc/ssh/sshd_config

PermitOpen can be used to specify the destinations to which port forwarding is allowed. If you only want to allow forwarding to certain IP addresses or hostnames, use this directive. The syntax is as follows:

PermitOpen host:port

PermitOpen IPv4_addr:port

PermitOpen [IPv6_addr]:port

AllowTCPForwarding can be used to turn SSH port forwarding on or off, or specify what type of SSH port forwarding is permitted. Possible configurations are:

AllowTCPForwarding yes #default setting

AllowTCPForwarding no #prevent all SSH port forwarding

AllowTCPForwarding local #allow only local SSH port forwarding

AllowTCPForwarding remote #allow only remote SSH port forwarding
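Putting the two directives together, a sketch of an sshd_config fragment that permits only local forwarding, and only to a single internal web server, might look like this (the IP address and ports are placeholders, not values from this tutorial):

```
# /etc/ssh/sshd_config (fragment)
AllowTCPForwarding local
PermitOpen 192.168.1.10:80 192.168.1.10:443
```

Remember to restart the sshd service after editing the file for the changes to take effect.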

To see more information about these options, you can check out the man page:

man sshd_config

Low latency

The only real problem that arises with SSH port forwarding is that there is usually a bit of latency. You probably won’t notice this as an issue if you’re doing something minor, like accessing text files or small databases.

The problem becomes more apparent when doing network intensive activities, especially if you have port forwarding set up as a SOCKS proxy server.

The reason for the latency is that SSH is tunneling TCP over TCP. This is a terribly inefficient way to transfer data and will result in slower network speeds.

You could use a VPN to prevent the issue, but if you are determined to stick with SSH tunnels, there is a program called sshuttle that corrects the issue. Ubuntu and Debian-based distributions can install it with apt-get:

sudo apt-get install sshuttle

If the package manager on your distribution doesn’t have sshuttle in its repository, you can clone it from GitHub:

git clone https://github.com/sshuttle/sshuttle.git

cd sshuttle

./setup.py install

Setting up a tunnel with sshuttle is different from the normal ssh command. To set up a tunnel that forwards all traffic (akin to a VPN):

sudo sshuttle -r user@remote_ip -x remote_ip 0/0 -vv

sshuttle command

Break the connection with a ctrl+c key combination in the terminal. Alternatively, to run the sshuttle command as a daemon, add the -D switch to your command.

Want to make sure that the connection was established and the internet sees you at the new IP address? You can run this curl command:

curl ipinfo.io

curl IP address

I hope you find the tutorial useful. Keep coming back.

December 27, 2019

Mustafa Akgül Özgür Yazılım Kış Kampı (Free Software Winter Camp) [2020]

The event, organised by the Linux Users Association (Linux Kullanıcıları Derneği) and Eskişehir Anadolu University, takes place this year between 25 and 28 January at the Faculty of Economics and Administrative Sciences on Anadolu University’s Yunus Emre campus in Eskişehir. Participation in the training sessions, which run in parallel classes covering different fields and different levels of knowledge, is free of charge. From the participants

December 26, 2019

İlledelinux Debian System

I am sharing a different project that I have named İlledelinux Debian System. This work, which I built on Debian Buster, is based on the “build your own system” philosophy. While you build your own system, however, the aim was to make things easy for all users, new and experienced alike. In this project, every choice is left entirely to the user. Whether it is package selection or

December 25, 2019

Run a command automatically at Openbox startup

For any command, script, program, or other item you want to run at the start of an Openbox session, I have adapted a program, whose interface is shown in the image, that does this automatically. You enter the command and click Ok; that’s all, and the command you entered runs automatically at startup. If you would like to use this feature, which makes Openbox easier to use, let’s begin integrating it into your system. First

December 24, 2019

NETCAT – transferring data using NC

We will look at how to transfer data between two computers using netcat. We will do this by opening a port between the machines. On the receiving computer, the following command is run to activate port 3434 and open a tunnel for dosyam.txt: nc -l -p 3434 > dosyam.txt. On the sending machine, the following command is run to transfer dosyam.txt: nc 192.168.1.12 3434 < dosyam.txt

21–22 December 2019 – About the free Open Source and Linux Administrator training event

As a project I undertook in the spirit of social responsibility, and in order to raise awareness of the importance and necessity of Linux systems together with the open source world, to give direction to young colleagues working in the sector or still studying, and to build knowledge and skills as well as awareness, I delivered a free Linux Administrator training on 21–22 December 2019. I compiled the training from RedHat and LPI content, covering the RedHat/CentOS and Ubuntu/Debian distributions...


December 23, 2019

15+ examples for Linux cURL command

In this tutorial, we will cover the cURL command in Linux. Follow along as we guide you through the functions of this powerful utility with examples to help you understand everything it’s capable of. The cURL command is used to download or upload data to a server, using one of its 20+ supported protocols. This data could be a file, email message, or web page.

What is the cURL command?

cURL is an ideal tool for interacting with a website or API, sending requests and displaying the responses to the terminal or logging the data to a file. Sometimes it’s used as part of a larger script, handing off the retrieved data to other functions for processing. Since cURL can be used to retrieve files from servers, it’s often used to download part of a website. It performs this function well, but sometimes the wget command is better suited for that job. We’ll go over some of the differences and similarities between wget and cURL later in this article. We’ll show you how to get started using cURL in the sections below.

Download a file

The most basic command we can give to cURL is to download a website or file. cURL will use HTTP as its default protocol unless we specify a different one. To download a website, just issue this command:

curl http://www.google.com

Of course, enter any website or page that you want to retrieve.

curl basic command

Doing a basic command like this with no extra options will rarely be useful, because this only tells cURL to retrieve the source code of the page you’ve provided.

curl output

When we ran our command, our terminal was filled with HTML and other web scripting code – not something that is particularly useful to us in this form.

Let’s download the website as an HTML document instead, so that the content can be displayed. Add the --output option to cURL to achieve this.
curl output switch

Now the website we downloaded can be opened and displayed in a web browser.

downloaded website

If you’d like to download an online file, the command is about the same. But make sure to append the --output option to cURL as we did in the example above.

If you fail to do so, cURL will send the binary output of the online file to your terminal, which will likely cause it to malfunction.

Here’s what it looks like when we initiate the download of a 500KB word document.

curl download document

The word document begins to download and the current progress of the download is shown in the terminal. When the download completes, the file will be available in the directory we saved it to.

In this example, no directory was specified, so it was saved to our present working directory (the directory from which we ran the cURL command).

Also, did you notice the -L option that we specified in our cURL command? It was necessary in order to download this file, and we go over its function in the next section.

Follow redirect

If you get an empty output when trying to cURL a website, it probably means that the website told cURL to redirect to a different URL. By default, cURL won’t follow the redirect, but you can tell it to with the -L switch.

curl -L www.likegeeks.com

curl follow redirect

In our research for this article, we found it was necessary to specify the -L on a majority of websites, so be sure to remember this little trick. You may even want to append it to the majority of your cURL commands by default.

Stop and resume download

If your download gets interrupted, or if you need to download a big file but don’t want to do it all in one session, cURL provides an option to stop and resume the transfer.

To stop a transfer manually, you can just end the cURL process the same way you’d stop almost any process currently running in your terminal, with a ctrl+c combination.

curl stop download

Our download began but was interrupted with ctrl+c. Now let’s resume it with the following syntax:

curl -C - example.com/some-file.zip --output MyFile.zip

The -C switch is what resumes our file transfer, but also notice that there is a dash (-) directly after it. This tells cURL to resume the file transfer, but to first look at the already downloaded portion in order to see the last byte downloaded and determine where to resume.

resume file download

Our file transfer was resumed and then proceeded to finish downloading successfully.
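To see why that trailing dash works, here is a toy illustration of the underlying resume-from-offset logic, with a local “source” file standing in for the remote one. The filenames are made up, and dd merely mimics what curl -C - does internally:

```shell
# Toy illustration of resume-from-offset: a local "source" file stands in
# for the remote file, and dd continues from where the partial copy stopped.
printf 'ABCDEFGHIJ' > source.bin        # the full "remote" file (10 bytes)
head -c 4 source.bin > partial.bin      # an interrupted download: 4 bytes

offset=$(stat -c%s partial.bin)         # size of what we already have
# Append everything after that byte offset, as curl -C - effectively does:
dd if=source.bin bs=1 skip="$offset" >> partial.bin 2>/dev/null

cmp -s source.bin partial.bin && echo "resume complete"
```

The key idea is that the size of the partial file is exactly the offset of the next byte to fetch.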

Specify timeout

If you want cURL to abandon what it’s doing after a certain amount of time, you can specify a timeout in the command. This is especially useful because some operations in cURL don’t have a timeout by default, so one needs to be specified if you don’t want it getting hung up indefinitely.

You can specify a maximum time to spend executing a command with the -m switch. When the specified time has elapsed, cURL will exit whatever it’s doing, even if it’s in the middle of downloading or uploading a file.

cURL expects your maximum time to be specified in seconds. So, to timeout after one minute, the command would look like this:

curl -m 60 example.com

Another type of timeout that you can specify with cURL is the amount of time to spend connecting. This helps make sure that cURL doesn’t spend an unreasonable amount of time attempting to contact a host that is offline or otherwise unreachable.

It, too, accepts seconds as an argument. The option is written as --connect-timeout.

curl --connect-timeout 60 example.com

Using a username and a password

You can specify a username and password in a cURL command with the -u switch. For example, if you wanted to authenticate with an FTP server, the syntax would look like this:

curl -u username:password ftp://example.com

curl authenticate

You can use this with any protocol, but FTP is frequently used for simple file transfers like this.

If we wanted to download the file displayed in the screenshot above, we just issue the same command but use the full path to the file.

curl -u username:password ftp://example.com/readme.txt

curl authenticate download

Use proxies

It’s easy to direct cURL to use a proxy before connecting to a host. cURL will expect an HTTP proxy by default, unless you specify otherwise.

Use the -x switch to define a proxy. Since no protocol is specified in this example, cURL will assume it’s an HTTP proxy.

curl -x 192.168.1.1:8080 http://example.com

This command would use 192.168.1.1 on port 8080 as a proxy to connect to example.com.

You can use it with other protocols as well. Here’s an example of what it’d look like to use an HTTP proxy to cURL to an FTP server and retrieve a file.

curl -x 192.168.1.1:8080 ftp://example.com/readme.txt

cURL supports many other types of proxies and options to use with those proxies, but expanding further would be beyond the scope of this guide. Check out the cURL man page for more information about proxy tunneling, SOCKS proxies, authentication, etc.

Chunked download large files

We’ve already shown how you can stop and resume file transfers, but what if we wanted cURL to only download a chunk of a file? That way, we could download a large file in multiple chunks.

It’s possible to download only certain portions of a file, in case you needed to stay under a download cap or something like that. The --range flag is used to accomplish this.

curl range man

Sizes must be written in bytes. So if we wanted to download the latest Ubuntu .iso file in 100 MB chunks, our first command would look like this:

curl --range 0-99999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso -o ubuntu-part1

The second command would need to pick up at the next byte and download another 100 MB chunk.

curl --range 100000000-199999999 http://releases.ubuntu.com/18.04/ubuntu-18.04.3-desktop-amd64.iso -o ubuntu-part2

Repeat this process until all the chunks are downloaded. The last step is to combine the chunks into a single file, which can be done with the cat command.

cat ubuntu-part? > ubuntu-18.04.3-desktop-amd64.iso
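You can sanity-check this range-and-reassemble workflow offline. The sketch below uses dd on a small local file as a stand-in for the two --range downloads (the file names are made up for the demo) and verifies that cat rebuilds the original byte-for-byte:

```shell
# Create a small sample file standing in for the remote .iso
printf 'ABCDEFGHIJ' > original.bin

# Simulate two --range downloads: bytes 0-4 and bytes 5-9
dd if=original.bin of=chunk1 bs=1 count=5 2>/dev/null
dd if=original.bin of=chunk2 bs=1 skip=5 count=5 2>/dev/null

# Reassemble the chunks and verify the result matches the original
cat chunk1 chunk2 > rebuilt.bin
cmp -s original.bin rebuilt.bin && echo "chunks reassembled correctly"
```

Because HTTP ranges are plain byte offsets, the concatenated chunks are identical to a single full download, which is why the cat step at the end works.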

Client certificate

To access a server using certificate authentication instead of basic authentication, you can specify a certificate file with the --cert option.

curl --cert path/to/cert.crt:password ftp://example.com

cURL has a lot of options for the format of certificate files.

curl cert

There are more certificate-related options, too: --cacert, --cert-status, --cert-type, etc. Check out the man page for a full list of options.

Silent cURL

If you’d like to suppress cURL’s progress meter and error messages, the -s switch provides that feature. It will still output the data you request, so if you’d like the command to be 100% silent, you’d need to direct the output to a file.

Combine this command with the -O flag to save the file under its remote name in your present working directory. This way, cURL produces no terminal output at all.

curl -s -O http://example.com

Alternatively, you could use the --output option to choose where to save the file and specify a name.

curl -s http://example.com --output index.html
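If you want to try the silent flags without touching a remote site, here's a sketch that serves a file from a throwaway local HTTP server and fetches it silently. It assumes python3 (3.7 or later, for --directory) is available; the port and file names are arbitrary choices for the demo.

```shell
# Serve a test file from ./srv on a local port (assumes python3 >= 3.7)
mkdir -p srv && echo "hello from server" > srv/hello.txt
python3 -m http.server 8741 --bind 127.0.0.1 --directory srv >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# -s suppresses the progress meter; -O saves the file under its remote name
curl -s -O http://127.0.0.1:8741/hello.txt

kill $SERVER_PID
cat hello.txt   # shows the downloaded copy
```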

curl silent

Get headers

Grabbing the headers of a remote address is very simple with cURL: just use the -I option.

curl -I example.com

curl headers

If you combine this with the -L option, cURL will return the headers of every address that it’s redirected to.

curl -I -L example.com

Multiple headers

You can pass headers to cURL with the -H option. And to pass multiple headers, you just need to use the -H option multiple times. Here’s an example:

curl -H 'Connection: keep-alive' -H 'Accept-Charset: utf-8' http://example.com

Post (upload) file

POST is a common way for websites to accept data. For example, when you fill out a form online, there’s a good chance that the data is being sent from your browser using the POST method. To send data to a website in this way, use the -d option.

curl -d 'name=geek&location=usa' http://example.com

To upload a file, rather than text, the syntax would look like this:

curl -d @filename http://example.com

Use as many -d flags as you need in order to specify all the different data or filenames that you are trying to upload.
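To watch -d in action without a public endpoint, here's a sketch that starts a tiny local echo server (a stand-in built with Python's standard http.server module; the port is an arbitrary choice) and posts the same form data to it:

```shell
# Minimal local server that echoes POST bodies back (assumes python3 is available)
python3 - <<'EOF' >/dev/null 2>&1 &
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)          # echo the POST data back
    def log_message(self, *args):
        pass                            # keep the server quiet

HTTPServer(("127.0.0.1", 8742), EchoHandler).serve_forever()
EOF
SERVER_PID=$!
sleep 1

# The server echoes back exactly what -d sent
curl -s -d 'name=geek&location=usa' http://127.0.0.1:8742/ > response.txt
kill $SERVER_PID
cat response.txt
```

The echoed response shows the raw request body as the server received it, which is a handy way to see exactly what -d puts on the wire.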

You can use the -T option if you want to upload a file to an FTP server.

curl -T myfile.txt ftp://example.com/some/directory/

Send an email

Sending an email is simply uploading data from your computer (or another device) to an email server. Since cURL is able to upload data, we can use it to send emails. There are a slew of options, but here’s an example of how to send an email through an SMTP server:

curl smtp://mail.example.com --mail-from me@example.com --mail-rcpt john@domain.com --upload-file email.txt

Your email file would need to be formatted correctly. Something like this:
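For illustration, a minimal message file (reusing the addresses from the command above; the exact headers and body are up to you) could look like:

```
From: Me <me@example.com>
To: John <john@domain.com>
Subject: Test message

Hello John,

This message was uploaded with cURL.
```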

As usual, more granular and specialized options can be found in the man page of cURL.

Read email message

cURL supports IMAP (and IMAPS) and POP3, both of which can be used to retrieve email messages from a mail server.

Login using IMAP like this:

curl -u username:password imap://mail.example.com

This command will list available mailboxes, but not view any specific message. To do this, specify the UID of the message with the -X option.

curl -u username:password imap://mail.example.com -X 'UID FETCH 1234'

Difference between cURL and wget

Sometimes people confuse cURL and wget because they’re both capable of retrieving data from a server. But this is the only thing they have in common.

We’ve shown in this article what cURL is capable of. wget provides a different set of functions. wget is the best tool for downloading websites and is capable of recursively traversing directories and links to download entire sites.

For downloading websites, use wget. If using some protocol other than HTTP or HTTPS, or for uploading files, use cURL. cURL is also a good option for downloading individual files from the web, although wget does that fine, too.

I hope you find the tutorial useful. Keep coming back.
