Kubernetes Terminology

How do you explain Kubernetes terminology and orchestration in plain terms that people can at least begin to understand? Heck, how do you even say Kubernetes? (Pronunciations may vary a bit, but the agreed-upon origin is from the Greek, meaning “helmsman” or “sailing master.”)

Here’s how Red Hat technology evangelist Gordon Haff explains Kubernetes in his book, “From Software and Vats to Programs and Apps,” co-authored with Red Hat cloud strategist William Henry:

“Kubernetes, or k8s (k, 8 characters, s… get it?), or ‘kube’ if you’re into brevity, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications,” Haff and Henry write. “In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.” Understanding basic Kubernetes terminology is the first step towards understanding the complete cluster.

Kubernetes Terminology

Kubernetes Cluster and Kubernetes Terminology

Cluster

A cluster is a group or bunch of nodes that run your containerized applications. You manage the cluster and everything it includes – in other words, you manage your application(s) – with Kubernetes.

Node

Nodes are the physical or virtual machines in your cluster; these “worker” machines have everything necessary to run your application containers, including the container runtime and other critical services.

Pod

This is essentially the smallest deployable unit of the Kubernetes ecosystem; more accurately, it’s the smallest
object. A pod specifically represents a group of one or more containers running together on your cluster.
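
As a rough illustration (the pod name and container image below are just examples, not taken from this article), a minimal pod definition in YAML looks something like this:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod             # example pod name
spec:
  containers:
    - name: web
      image: nginx:1.17       # example container image
      ports:
        - containerPort: 80

You would submit a manifest like this to the cluster with kubectl (shown further down), and Kubernetes schedules the pod onto a suitable node.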

Containers sound so simple. We know what the word means: It’s something that you use to hold stuff. Just do a Google image search: The top visual explainer is a shipping container.

This translates reasonably well to a software context: A container is still essentially something that we put stuff in; in this case, the “stuff” is an application’s code as well as everything that code needs to run properly.

Simple enough, right?

“Containers solve the packaging problem of how to quickly build and deploy applications. They’re akin to virtual machines, but with two notable differences: they’re lightweight and spun up in seconds; and they move reliably from one environment to another (what works on the developer’s computer will work the same in dev/test and production).”

Kubernetes API

The Kubernetes API, in kubernetes terminology, is the lifeblood of the system. You may have heard of Kubernetes described as a “declarative” tool – in other words, Kubernetes lets you say “this is how I want things to run,” and then it does what’s needed to make that happen in a highly automated way. The Kubernetes API helps make that a reality. The official Kubernetes site defines the Kubernetes API as “the application that serves Kubernetes functionality through a RESTful interface and stores the state of the cluster.”

Kubernetes Control Plane

This sits between a cluster and Kubernetes basically as a necessary intermediary; it makes sure everything behaves
properly – like a chaperon at a container dance party. When people extol automation as one of the key benefits of
Kubernetes and container orchestration, this is a key piece. Says the Kubernetes official site: “The Control Plane
maintains a record of all of the Kubernetes Objects in the system, and runs continuous control loops to manage
those objects’ state.” The control plane continuously checks and rechecks that everything matches your desired
state. In general, the job of a controller in Kubernetes – there are multiple types – is to take actions needed to
manage a specific type of resource.

Master

The Kubernetes master maintains the desired state of your cluster; you will commonly see it referred to as the master node. Every cluster has a master node, as well as several “worker” nodes. The master includes three critical processes for managing the state of your cluster: kube-apiserver, kube-controller-manager and kube-scheduler. When you make changes, you’re almost always making them to the master node, not to each individual node in a cluster.

Kubectl

Simply put, in Kubernetes terminology, kubectl is a command line interface (CLI) for managing operations on your Kubernetes clusters. It does so by communicating with the Kubernetes API. kubectl is typically run from an administrator's workstation (or from the master node) and talks to the master through the API; the agent that actually runs on every worker node is the kubelet.
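
A few typical kubectl commands, as a hedged sketch (hello-pod.yaml refers to the example manifest shown earlier; any manifest file name works):

kubectl get nodes                 # list the nodes in the cluster
kubectl get pods                  # list pods in the current namespace
kubectl apply -f hello-pod.yaml   # declare the desired state described in a manifest
kubectl describe pod hello-pod    # show detailed information about a single pod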

Volume

A volume is simply a directory of data; it lives within a pod and can be accessed by any container running in that pod. A volume is the abstraction that lets Kubernetes deal with the ephemeral nature of containers; when a container is retired, the volume (and its data) continues to exist within the pod, still accessible to other containers. It exists as long as its pod exists; once the latter “dies,” so does the volume and its data.
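
As a sketch of how this looks in practice (names are illustrative), the pod below shares an emptyDir volume, the simplest built-in volume type, between two containers:

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod        # example name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}               # temporary volume that lives as long as the pod does
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data

Both containers see the same /data directory; if one container is restarted, the data in the volume is still there for the other.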

Persistent Volume

Speaking of ephemerality and data: persistent volumes deal with storage that needs to outlive any particular container, pod, or application, whereas ordinary volumes are tied to the lifetime of their pod. This becomes particularly important when you’re discussing stateful applications like databases.

Certified Kubernetes Administrator Training is the best place to start.

Netflix AWS Case Study

Netflix was originally a DVD shipping business that would send out DVDs of your chosen programs to you. This was going well until 2008, when they experienced a major database loss and for three days could not ship any DVDs to their customers. That was when the senior management at Netflix realized that they had to shift from continuous vertical scaling, which leads to single points of failure, to a more reliable and scalable horizontal scaling system. They chose Amazon Web Services despite having Amazon as a competitor (Amazon has their own streaming service known as Amazon Prime) because AWS provided them with the greatest scaling capabilities and the biggest set of available features. It took seven years of migration for Netflix to shut down their last remaining data centres and move completely to the cloud.

Moving to the cloud has allowed Netflix to keep their existing members well engaged with overall viewing growing exponentially.


Netflix itself has continued to evolve rapidly, adding many new features and relying on ever-growing volumes of data. Supporting this fast growth would not have been possible using their own in-house data centres; Netflix could not have racked the servers fast enough to support their own growth. The cloud, on the other hand, brings elasticity, allowing Netflix to add thousands of virtual servers and petabytes of storage within minutes, which makes the whole process much easier.

As of January 2016, Netflix has expanded into 130 new countries. It uses multiple AWS Cloud regions which are spread all over the world to create a better and more enjoyable streaming experience for Netflix members wherever they are.

Netflix relies on Cloud for all its scalability, computing and storage needs (not only video streaming) – Netflix business logic, distributed databases, big data processing, analytics, recommendations, transcoding and hundreds of other functions that are used by Netflix all go through their Cloud infrastructure.  Netflix also has their own Content Delivery Network (CDN) known as Netflix Open Connect which is used to deliver videos globally in an efficient manner.

When Netflix was using their own data centres, they faced a lot of outages. Cloud computing is not perfect either; even though Netflix has hit some rough patches in the cloud, a steady increase in overall availability has been observed. Failures are ultimately unavoidable in any large-scale distributed system, even a cloud-based one. However, a cloud-based system allows you to build in redundancy measures, which become quite helpful. Cloud computing has made it possible to survive failures without impacting the member experience.

Netflix did not shift to the cloud for cost-reduction reasons, but Netflix’s cloud costs ended up being a fraction of their data centre costs, which was a pleasant surprise. This was due to the elasticity of cloud computing, enabling Netflix to continuously optimize instances, growing and shrinking them as required, without the need to maintain large-capacity machines. Economies of scale also help Netflix in this scenario.

The benefits are very clear, but it still took seven years for Netflix to complete the migration. Moving to the cloud is a lot of work, and a lot of factors need to be considered. Netflix could have simply moved all of its existing systems to AWS, but bringing existing systems along also brings all the problems and limitations that were present. So Netflix took the cloud-native approach: they rebuilt virtually all of their technology and fundamentally changed the way they operate the whole company. Netflix migrated from a monolithic application to thousands of micro-services.

RHCSA 8 Sample Exam

Introduction

This practice exam is for those that have gone through an RHCSA course/book and want to test their knowledge before sitting the RHCSA 8 exam.

  • Optional automatic exam setup available

Here is an automated deployment for Mac/Linux/Windows that sets up the practice exam environment for you, including IPA server/client installation and configuration. You can also use your own lab environment. Navigate to the repo below and follow the README instructions:
https://github.com/rdbreak/rhcsa8env

It’s recommended you make a separate ~/bin directory and then clone the environment. Check the included README.md for the environment info and instructions.

Client Set Up

Hostname – system.eight.example.com

  1. IP – 192.168.55.151/24
  2. DNS – 192.168.55.150
  3. GW – 192.168.5.1

Install “Workstation”, then add “Development Tools, Graphical Administration Tools, and Container tools” to your installation. Configure the partitions as follows:

  1. 15GB disk space with LVM partitions.
    1. 10GB – /
    2. 1GB – swap
    3. 1GB – boot
    4. 4GB – unallocated space
  2. If you’re using a custom environment, then add an additional 5GB disk for use in the exam. If you’re using the automated deployment, then an additional disk is already supplied for you to use.


NOTE – The below questions assume you’re using the automated deployment but you can also use a practice environment you created. However, you will have to set up your own repo, change host names, IP addresses, etc to reflect your own environment details.

Exam

Ensure all the tasks are implemented with firewalld and SELinux enabled. Your server should be able to survive a reboot. Good luck!

  1. Interrupt the boot process and reset the root password. Change it to “wander” to gain access to the system.
  2. Repos are available from the repo server at http://repo.eight.example.com/BaseOS and http://repo.eight.example.com/AppStream for you to use during the exam.
  3. The system time should be set to your (or nearest to you) timezone and ensure NTP sync is configured.
  4. Add the following secondary IP addresses statically to your current running interface. Do this in a way that doesn’t compromise your existing settings:
    1. IPV4 – 10.0.0.5/24
    2. IPV6 – fd01::100/64
  5. Enable packet forwarding on system1. This should persist after reboot.
  6. System1 should boot into the multiuser target by default and boot messages should be present (not silenced).
  7. Create a new 2GB volume group named “vgprac”.
  8. Create a 500MB logical volume named “lvprac” inside the “vgprac” volume group.
  9. The “lvprac” logical volume should be formatted with the xfs filesystem and mount persistently on the /mnt/lvprac directory.
  10. Extend the xfs filesystem on “lvprac” by 500MB.
  11. Use the appropriate utility to create a 5TiB thin provisioned volume.
  12. Configure a basic web server that displays “Welcome to the web server” once connected to it. Ensure the firewall allows the http/https services.
  13. Find all files that are larger than 5MB in the /etc directory and copy them to /find/largefiles
  14. Write a script named awesome.sh in the root directory on client1 (one possible sketch of this script is shown after this list).
    1. If “me” is given as an argument, then the script should output “Yes, I’m awesome.”
    2. If “them” is given as an argument, then the script should output “Okay, they are awesome.”
    3. If the argument is empty or anything else is given, the script should output “Usage ./awesome.sh me|them”
  15. Create users phil, laura, stewart, and kevin.
    1. All new users should have a file named “Welcome” in their home folder after account creation.
    2. All user passwords should expire after 60 days and be at least 8 characters in length.
    3. phil and laura should be part of the “accounting” group. If the group doesn’t already exist, create it.
    4. stewart and kevin should be part of the “marketing” group. If the group doesn’t already exist, create it.
  16. Only members of the accounting group should have access to the “/accounting” directory. Make laura the owner of this directory. Make the accounting group the group owner of the “/accounting” directory.
  17. Only members of the marketing group should have access to the “/marketing” directory. Make stewart the owner of this directory. Make the marketing group the group owner of the “/marketing” directory.
  18. New files should be owned by the group owner and only the file creator should have the permissions to delete their own files.
  19. Create a cron job that writes “This practice exam was easy and I’m ready to ace my RHCSA” to /var/log/messages at 12pm only on weekdays.
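
For reference, here is one possible sketch for task 14 (other approaches, such as a case statement, work just as well):

#!/bin/bash
# /root/awesome.sh - prints a message depending on the first argument
if [ "$1" = "me" ]; then
    echo "Yes, I'm awesome."
elif [ "$1" = "them" ]; then
    echo "Okay, they are awesome."
else
    echo "Usage ./awesome.sh me|them"
fi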

Sample Exam Credit goes to: https://github.com/rdbreak

If you are completely new to Linux, I would suggest that you go through the basic Linux program also known as RHCSA: Linux Training
If you are an expert with Linux, grow your career with RHCE, AWS, DevOps, OpenStack or OpenShift.

Controlling Resource Limit on Services using CGroup

While running critical services on a server, we might want to restrict certain services or guarantee a certain amount of resources to specific services. In other words, we want to control resource limits on services using cgroups.

We can restrict or guarantee a certain amount of resources such as CPU, memory and block I/O to critical services using cgroups.

Control groups (cgroups) are a Linux kernel mechanism for fine-grained control of resources. With cgroups, resources are placed in controllers representing the type of resource; for example, cpu for CPU time, memory for memory usage, and blkio for disk I/O.

Controllers can be subdivided, as represented by a tree structure, with different weights or limits associated to branches and leaves, which are the cgroups. Each cgroup has zero or more processes associated with it. Resources are shared equally in a cgroup by default, but different limits and weights can be set on child cgroups as long as they do not exceed the parent cgroup limits. New cgroups inherit the limits set on its parent cgroup, unless explicitly overridden.

The hierarchy (control groups tree) is defined by providing structure to the cgroups virtual file system, mounted by default on the /sys/fs/cgroup/ directory. This can be done manually, by creating and removing sub-directories in /sys/fs/cgroup/, or by using the systemd system and service manager.

The resource controllers (a kernel component) then modify the behavior of processes in cgroups by limiting, prioritizing or allocating system resources, (such as CPU time, memory, network bandwidth, or various combinations) of those processes.

The added value of cgroups is process aggregation, which enables division of hardware resources among applications and users, thereby increasing the overall efficiency, stability and security of the users’ environment.

Controlling Resource Limit on Services using CGroup

Managing cgroups with systemd daemon is a simple way to handle complex resource configurations. By default, systemd subdivides the cpu, cpuacct, blkio, and memory cgroups into three equal slices: system for system services and daemons, machine for virtual machines and containers, and user for user sessions.
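
If you want to see this hierarchy on your own system, systemd ships a couple of standard tools for it (the exact tree will differ from machine to machine):

systemd-cgls                    # show the cgroup hierarchy as a tree of slices, scopes and services
systemctl status system.slice   # show processes and resource accounting for the system slice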

I don’t want to do a deep dive into cgroups in this post. I just want you to understand how to limit or guarantee resources to an important service using cgroups.

You can learn more on cgroups in this post on Red Hat Blog.

The easiest way to implement cgroups is by using the “systemctl” command.

Using the systemctl set-property command sets the specified unit properties at runtime.

Changes are applied immediately and stored persistently.

In this example I am limiting the sshd service to use only 500M of physical memory using cgroups, implemented using the “systemctl set-property” command.

Step #1

As expected, the sshd service is currently running and is not under any resource control, so the service could dominate a resource. We can verify it by using the “systemctl status sshd” command.

systemctl status sshd
checking status of sshd service

Step #2

Now let’s use the systemctl set-property command to configure a cgroups limit on the sshd service.

limit resources for sshd service using cgroups

systemctl set-property sshd.service MemoryAccounting=yes

systemctl set-property sshd.service MemoryLimit=500M

I could also add the --runtime option to the systemctl set-property command, which makes the change runtime-only: it is applied immediately but not stored persistently, so it is lost after a reboot. So, I am not using it.

You might need to reload the systemd daemon using the “systemctl daemon-reload” command:

systemctl daemon-reload

Then you can use “systemctl restart sshd” to restart the service itself:

systemctl restart sshd

Step #3

Check the service status again and you will see the limit imposed on the service using cgroups. systemd also automatically creates a drop-in directory in which the configuration is stored permanently (see step #4).

limit imposed on sshd service using cgroup

Step #4

It will also automatically create a directory “sshd.service.d” under “/etc/systemd/system/” and the limits imposed using systemctl commands will be permanently stored there in .conf files.

So the changes are permanent and will persist across reboots.
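
As a rough sketch (exact file names and formatting can vary between systemd versions), the drop-in files created by the systemctl set-property commands above look something like this:

/etc/systemd/system/sshd.service.d/50-MemoryAccounting.conf
[Service]
MemoryAccounting=yes

/etc/systemd/system/sshd.service.d/50-MemoryLimit.conf
[Service]
MemoryLimit=524288000

The memory value is stored in bytes (524288000 bytes = 500M).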

limits are permanently stored inside /etc/systemd/system/<service-name>.service.d directory

Step #5

These resource limits imposed on services using cgroups will remain in place and can be checked even after you restart the sshd service.


Understanding CPU Affinity on Linux Server

Processor affinity, also called CPU pinning or “cache affinity”, enables the binding and unbinding of a process or a thread to a central processing unit (CPU) or a range of CPUs, so that the process or thread will execute only on the designated CPU or CPUs rather than on any CPU. So let’s begin understanding CPU affinity.

Understanding CPU Affinity

Processor affinity takes advantage of the fact that remnants of a process that was run on a given processor may remain in that processor’s state (for example, data in the cache memory) after another process was run on that processor. Scheduling that process to execute on the same processor improves its performance by reducing performance-degrading events such as cache misses. A practical example of processor affinity is executing multiple instances of a non-threaded application, such as some graphics-rendering software.

with cpu affinity / pinning, we can bind a process to a particular cpu to increase performance and cache hits

Under normal conditions, the kernel determines the CPUs a process runs. Every time the scheduler reschedules a process, it can go to any of the available CPUs. While this is fine for most workloads, sometimes it is desirable to limit on which CPU(s) a process is allowed to run. For example, limiting a memory-intensive process to just one or two CPUs increases the chances of a cache hit, thus increasing overall performance.
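
For a quick ad-hoc test, separate from the persistent systemd-based approach shown later in this post, the taskset utility from util-linux can pin a process to specific CPUs (the program name and PID below are just placeholders):

taskset -c 0 myapp      # start "myapp" bound to CPU 0
taskset -cp 1234        # show the current CPU affinity of the process with PID 1234
taskset -cp 0,1 1234    # change that process's affinity to CPUs 0 and 1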

Benefit of CPU Affinity

The benefit of CPU affinity is optimized cache performance. Multiprocessing computers go through a lot of trouble to keep the processor caches valid. Data can be kept in only one processor’s cache at a time; otherwise, the caches may grow out of sync, leading to the question of which cache holds the most up-to-date copy of the data from main memory. The real problem comes into play when processes bounce between processors: they constantly cause cache invalidations, and the data they want is never in the cache when they need it. Thus, cache miss rates grow very large. CPU affinity protects against this and improves cache performance.

Configuring CPU Affinity for Apache Webserver

Now, after understanding CPU affinity on Linux servers, let’s see how to configure CPU affinity for our web server running on a Red Hat Enterprise Linux 8 (RHEL 8) server.

Apache is already running on my RHEL server, and by default CPU affinity is not configured.

apache is running on RHEL 8 server

We will be installing the “tuna” application to check which CPU Apache is running on.

yum install tuna
installing "tuna" application to check cpu affinity
installing “tuna” application to check cpu affinity

Using “tuna” to check on which CPU the httpd process is running.

tuna -t httpd -P

We can clearly see under the “affinity” column that httpd is allowed to run on all CPU cores (0,1,2). CPU index 0 is the first CPU, and so on.

Now let’s pin the httpd process to CPU 0.

Create a directory “httpd.service.d” under /etc/systemd/system/

mkdir -p /etc/systemd/system/httpd.service.d/

Move inside the directory and create a file called cpuaffinity.conf (or any name with a .conf extension) with the following contents:

[Service]
CPUAffinity=0

Finally, reload the systemd daemon and restart the httpd service to apply the CPU affinity setting to the httpd process.

Once it’s done, we can confirm that the httpd process is pinned to CPU 0 under the “affinity” column by running the “tuna” command again.

systemctl daemon-reload
systemctl restart httpd
tuna -t httpd -P

Honestly, I highly recommend taking Red Hat Performance Tuning (RH442) training after you are done with your RHCSA or RHCE training. RH442 really helps you understand the internals of the Linux OS.

How to Hide Processes from Normal Users

If you are administering a multi-user Linux server, then all users, including normal or ordinary users, can see the processes run by other users, including root and system processes. This is the default behavior of a Linux server. On a critical server, we might want to hide processes from normal users.

But the default behavior, where all users can see all processes (those from other users and also processes owned by the “root” user), is not good from a security point of view.

You might want to know how to hide Linux processes from normal users, especially processes run by root.

How to Hide Processes from Normal Users

When looking in /proc you will discover a lot of files and directories. Many of them are just numbers, which represent the information about a particular process ID (PID).

By default, Linux systems are deployed to allow all local users to see all this information. This includes process information from other users, which could include sensitive details that you may not want to share. By applying some file system configuration tweaks, we can change this behavior and improve the security of the system.

If you’re running a recent kernel (version 3.2 or higher), you can hide other users’ Linux processes by mounting the /proc filesystem with the hidepid option.

Understanding /proc

The proc filesystem is a pseudo-filesystem which provides an interface to kernel data structures. It is commonly mounted at /proc. Typically, it is mounted automatically by the system, but it can also be mounted manually using a mount command.

Most of the files in the proc filesystem are read-only, but some files are writable, allowing kernel variables to be changed.

  1. hidepid=0 (default): This is the default setting and gives you the default behaviour, where all users can see all processes.
  2. hidepid=1: With this option a normal user does not see other users’ processes in the output of ps, top, etc., but is still able to see the process IDs in /proc.
  3. hidepid=2: Users are only able to see their own processes (as with hidepid=1), and in addition the other process IDs are hidden from them in /proc.

The default behaviour is that a normal user is able to see all the processes running on the system, including processes of other users and the root user.

As you can see here, a normal user, alok, is able to see all the processes running on the Linux server using the top and ps commands.

Using top command

normal user is able to see all the processes running on the system

Using ps command

normal user is able to see all the processes running on the system

Now let’s remount /proc with hidepid=2, so normal users will not be able to view processes of other users and root. This is a good security setting on a critical Linux server.
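
A minimal sketch of the remount command (standard mount syntax, run as root):

mount -o remount,hidepid=2 /proc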


Now, once it’s done, a normal user (user alok here) will not be able to see processes owned by other users or by root.

Of course, you can make it permanent by adding this entry to /etc/fstab:

proc    /proc    proc    defaults,hidepid=2     0     0

Now if the normal user, alok, tries to use the top and ps commands again, he will only be able to see the processes owned by him. Thus we have successfully hidden processes from normal users.

Using ps command

normal user is unable to see processes by other users and user root

Using top command

normal user is unable to see processes by other users and user root

You can learn more about Linux server administration by joining RHCSA training. RHCSA-level knowledge is a must for getting any job in IT.

Questions to ask in an interview

When an interview ends, the interviewer will probably ask you if you have any questions. The usual answer to this is a smile with a soft no. THAT IS COMPLETELY WRONG. You should not feel embarrassed asking questions. In fact, you should definitely ask at least one question.

The real challenge here is to ask a question which is insightful but does not sound rude to the interviewer. Here are the top 6 questions to ask your interviewer once the interview ends.

BEST QUESTIONS TO ASK IN THE INTERVIEW

  1. What would you consider to be the most important aspects of this job?
  2. What are the skills and attributes you value most for someone being hired for this position?
  3. Could you describe a typical day or week in this position? The typical client or customer I would be dealing with? The expectations?
  4. What are the performance expectations of this position over the first 12 months? (To determine how and when you will be evaluated, Payne advises asking this one.)
  5. What types of skills do you NOT already have on board that you’re looking to fill with a new hire?
  6. What is the overall structure of the company and how does your department fit into that structure? (Asking about your department’s workers and its role in the company can help you understand more about the company’s culture and hierarchy.)

Remember to sound polite, and your chances of getting that job will surely increase!

More Knowledge

If you are completely new to Linux, I would suggest that you go through the basic Linux program also known as RHCSA: Linux Training
If you are an expert with Linux, grow your career with RHCE, AWS, DevOps, OpenStack or OpenShift.


Basic Ansible Playbooks 2

Some Basic Ansible Playbooks

As a system administrator, you will need to perform a lot of tasks on a regular basis. Instead of doing these tasks manually through the command line, you can use Ansible & Ansible Playbooks to automate your tasks which would make your work easier as well as efficient. Of course, you will first need to install and connect Ansible to the machines you need to manage but once you do that, these playbooks will make your work very easy. These are some basic ansible playbooks to help you get started!

Configuring yum repo on client side

One of the most basic tasks you will need to know is how to configure a yum repository on a remote machine to get the required packages. Below is a playbook for creating a yum repository.


You do not need to write the ‘.repo’ extension with the file name since the module yum_repository will do it automatically for you. The rest of it is quite simple if you have done RHCSA Training. Here is the code for configuring yum repo on client side:

---
- name: configure a sample yum repo
  hosts: 192.0.0.6
  tasks:
    - name: install example repo
      yum_repository:
        file: example
        name: example-internal
        description: networknuts yum repo
        baseurl: http://material.example.com/yum/repository/
        enabled: yes
        gpgcheck: no

Update all the packages on a remote machine

For updating all the packages on a remote machine, you’d generally give the command yum update, but when you wish to do it through Ansible, you can simply use an asterisk (which works as a wildcard), and Ansible will automatically update all the packages to the latest version.


Here is the code for the updating all packages on a remote machine:

---
- name: update all packages
  hosts: all
  tasks:
          - name: update all pkgs
            yum:
                    name: '*'
                    state: latest

Create a group and then put the user in that group

This is a very simple playbook which will really help you as a system administrator, since user management is one of the most recurring tasks in your job profile. This playbook will create a group and then a user, which will also be made a part of said group.


As you can see, we have also given the user a custom UID and comment. This of course is optional. Here is the code for creating a group, a user and then putting the user into that group:

---
- hosts: all
  tasks:
    - name: create a group
      group:
        name: mygroup
        state: present
    - name: create a user in the group
      user:
        name: aryan123
        comment: "Aryan Srivastava"
        uid: 1998
        group: mygroup

Creating a sample Apache Server

A simple httpd server is probably the most crucial part of RHCE Training as well as of a system administrator’s job. It is very easy and can really help you a lot in the long run. This playbook is a little long but if you go through it, it is actually quite simple to understand.


The first step, of course, is to install httpd, which is the package name of Apache. After this we create the index.html file in the location /var/www/html. Once we do this, we need to allow the http and https services through the firewall. The last step is to restart the httpd (Apache) service for the changes to take effect. Here is the code for the sample Apache server:

---
- name: configuring an apache server
  hosts: all
  tasks:
          - name: installing httpd package
            dnf:
                    name: httpd
                    state: present

          - name: configuring index.html file
            copy:
                    content: " Input content here "
                    dest: /var/www/html/index.html

          - name: adding http in firewalld
            firewalld:
                    service: http
                    permanent: true
                    immediate: true
                    state: enabled

          - name: adding https in firewalld
            firewalld:
                    service: https
                    permanent: true
                    immediate: true
                    state: enabled

          - name: restart httpd
            service:
                    name: httpd
                    state: restarted

These are some of the most essential playbooks to use with Ansible in a Linux environment. You can read my previous entry on essential playbooks here.

More Knowledge

If you are completely new to Linux, I would suggest that you go through the basic Linux program also known as RHCSA: Linux Training
If you are an expert with Linux, grow your career with RHCE, AWS, DevOps, OpenStack or OpenShift.

Basic Ansible Playbooks

Basic Ansible Playbooks needed by any system administrator in their day-to-day tasks.

As a system administrator, you will need to perform a lot of tasks on a regular basis. Instead of doing these tasks manually through the command line, you can use Ansible & Ansible Playbooks to automate your tasks which would make your work easier as well as efficient. Of course, you will first need to install and connect Ansible to the machines you need to manage but once you do that, these playbooks will make your work very easy.

Installing a package

One of the most basic and frequently required tasks: a system administrator should know how to install packages. Instead of manually connecting to remote machines and installing the package on each of them, you can do so by simply running this playbook.


In this playbook, you can see that we are using the dnf module, since dnf has replaced yum in RHEL 8. It is also recommended that you use the dnf command instead of the yum command in your RHCSA & RHCE training. Simply use the same playbook, just replace name: httpd with the name of your desired package. Here is the code for this playbook so you can quickly copy-paste it:

---
- name: install a particular package
  hosts: all
  tasks:
          - name: install apache
            dnf:
                    name: httpd
                    state: present                                 
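
To run a playbook like this one, assuming your inventory is already configured and the file is saved as, say, install_pkg.yml (the file name is just an example):

ansible-playbook install_pkg.yml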

Restart a service

Restarting a service is important to make your desired changes visible. For example, you will have to restart httpd once you make some changes in the /var/www/html/ directory. You might also need to start or stop a service on multiple machines all together. You can do this using the service module in Ansible, which will start, restart or stop a service on multiple machines simultaneously.


In this playbook, you can see that we have simply given the service module two parameters: the name of our service and the state we want it in. If you want to start or stop the service, you can use the started or stopped keyword respectively to tell Ansible to do so. Yes, it uses the past tense. Here is the code for this playbook:

---
- name: service module
  hosts: all
  tasks:
          - name: restart httpd
            service:
                    name: httpd
                    state: restarted

Replace a particular line in a file

Imagine you have multiple machines, each with a particular file in which you need to replace a particular line; sounds like a nightmare, doesn’t it? You can use the lineinfile module as given below.


Just change path to the file location; regexp should match the existing line that needs to be replaced, and line contains the new text that will replace it. Run this playbook and voila! Your work is done. Here is the code for this playbook:

---
- name: lineinfile module
  hosts: all
  tasks:
          - name: using lineinfile module
            lineinfile:
                    path: /tmp/myfile2
                    regexp: 'rhce training'
                    line: rh294 training
                    state: present

Creating a user with a password

Creating a user is probably the most boring task a system administrator has to do; nevertheless, doing it on multiple machines would be such a pain. If you use this playbook, you can do the same in a single run.


Just replace name: enterusername with the required username and 'mypassword' with the password that you want to set (note the quotes in the playbook below; the password is passed as a literal string through the password_hash filter). RHEL 8 uses SHA-512 as the algorithm to hash user passwords, so we will use the same. Here is the code for this playbook:

---
- name: user creation
  hosts: all
  tasks:
    - name: creating a user with a defined password
      user:
                name: enterusername
                password: "{{ mypassword | password_hash('sha512') }}"
                state: present

More Knowledge

If you are completely new to Linux, I would suggest that you go through the basic Linux program also known as RHCSA: Linux Training
If you are an expert with Linux, grow your career with RHCE, AWS, Devops, Openstack or Openshift.

Understanding Vertical Scaling Horizontal Scaling

Scaling is about allocating resources for an application and managing those resources efficiently to minimize contention. The user experience (UX) is negatively impacted when an application requires more resources than are available.

What is Scalability?

The scalability of an application is a measure of the number of users it can effectively support at the same time. The point at which an application cannot handle additional users effectively is the limit of its scalability.

Scalability reaches its limit when a critical hardware resource runs out, though scalability can sometimes be extended by providing additional hardware resources.

The hardware resources needed by an application usually include:

  1. CPU
  2. Physical memory
  3. Hard Disk (capacity and throughput i.e SATA vs SSD)
  4. Network Bandwidth

Physical Server or Virtual Machine – It doesn’t matter

An application runs on multiple nodes, which have hardware resources. Application logic runs on compute nodes and data is stored on data nodes. There are other types of nodes, but these are the primary ones. A node might be part of a physical server (usually a virtual machine), a physical server, or even a cluster of servers, but the generic term
node is useful when the underlying resource doesn’t matter.

Usually it doesn’t matter.

The manner in which we add these resources defines which of two scaling approaches we take.

• To vertically scale up is to increase overall application capacity by increasing the resources within existing nodes.
• To horizontally scale out is to increase overall application capacity by adding nodes.

Take this Example:

Increasing Capacity of Roadways

Consider a road for automobile travel. If the road was unable to support the desired volume of traffic, we could improve matters in a number of possible ways.

Option #1 – One improvement would be to upgrade the road materials (“the hardware”) from a dirt road to pavement to support higher travel speeds. This is vertically scaling up; the cars and trucks (“the software”) will be able to go faster.

Option #2 – Alternatively, we could widen the road to multiple lanes. This is horizontally scaling out; more cars and trucks can drive in parallel. And of course we could both upgrade the road materials and add more lanes, combining scaling up with scaling out.


Vertically Scaling Up

Vertically scaling up is also known simply as vertical scaling or scaling up. The main idea is to increase the capacity of individual nodes through hardware improvements. This might include adding memory, increasing the number of CPU cores, or other single node changes.

There are no guarantees that sufficiently capable hardware exists or is affordable. And once you have the hardware, you are also limited by the extent to which your software is able to take advantage of the hardware.

Because hardware changes are involved, usually this approach involves downtime.

Horizontally Scaling Out

Horizontally scaling out, also known simply as horizontal scaling or scaling out, increases overall application capacity by adding entire nodes. Each additional node typically adds equivalent capacity, such as the same amount of memory and the same CPU.

The architectural challenges in vertical scaling differ from those in horizontal scaling; the focus shifts from maximizing the power of individual nodes to combining the power of many nodes.

Homogeneous Nodes

When all the nodes supporting a specific function are configured identically—same hardware resources, same operating system, same function-specific software—we say these nodes are homogeneous.

Horizontal scaling with homogeneous nodes is an important simplification. If the nodes are homogeneous, then basic round-robin load balancing works nicely, capacity planning is easier, and it is easier to write rules for auto-scaling. If nodes can be different, it becomes more complicated to efficiently distribute requests because more context is needed.

Scalability is a Business Concern

A speedy website is good for business. A Compuware analysis of 33 major retailers across 10 million home page views showed that a 1-second delay in page load time reduced conversions by 7%. Google observed that adding a 500-millisecond delay to page response time caused a 20% decrease in traffic, while Yahoo! observed a 400-millisecond delay caused a 5-9% decrease. Amazon.com reported that a 100-millisecond delay caused a 1% decrease in retail revenue. Google has started using website performance as a signal in its search engine rankings.

More Knowledge

Check out our other free articles on our blog.
New to Linux? Go for the Linux course.
Want to learn automation? Try our Master Devops Program.
Fresher who needs a job? Our Job Guarantee Program is perfect for you.
If you are an expert with Linux, grow your career with AWS, Devops, Openstack or Openshift.
