Counting the Number of Files in a Folder Efficiently

Managing folders with a massive number of files can be a daunting task, especially when you need to quickly assess how many files are contained within. Thankfully, there are efficient ways to tackle this challenge using command-line tools.

Using `ls` and `wc`

One approach is to leverage the combination of `ls` and `wc` commands. By navigating to the target directory and executing a couple of commands, you can obtain the file count promptly.

cd /path/to/folder_with_huge_number_of_files1
ls -f | wc -l

Here’s a breakdown of what each command does:

  • `ls -f`: Lists all directory entries without sorting; `-f` also implies `-a`, so the count includes the `.` and `..` entries.
  • `wc -l`: Counts the number of lines output by `ls`.

This method efficiently calculates the total number of files within the specified directory.
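If you need the count to cover only regular files, excluding the `.` and `..` entries and any subdirectories that `ls -f | wc -l` includes, `find` can do the filtering. A quick sketch, run from inside the target directory:

```shell
# Count only regular files directly inside the current directory,
# excluding ".", "..", and any subdirectories.
find . -maxdepth 1 -type f | wc -l
```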

Using Perl Scripting

Alternatively, Perl provides another powerful option for counting files within a directory. With a concise script, you can achieve the same result with ease.

cd /path/to/folder_with_huge_number_of_files2
perl -e 'opendir D, "."; @files = readdir D; closedir D; print scalar(@files)."\n";'

In this Perl script:

  • `opendir D, "."`: Opens the current directory.
  • `@files = readdir D;`: Reads the directory entries into an array (like `ls -f`, `readdir` includes `.` and `..`).
  • `closedir D;`: Closes the directory handle.
  • `print scalar(@files)."\n";`: Prints the number of entries.

Both methods provide efficient solutions for determining the number of files in a directory, catering to different preferences and workflows.

Next time you find yourself grappling with a folder overflowing with files, remember these handy techniques to streamline your file management tasks.

Linux IPTables: limit the number of HTTP requests from one IP per minute (for CentOS, RHEL and Ubuntu)

Protecting Your Web Server: Implementing IP-based Request Limiting with IPTables on Linux

In the face of relentless cyber attacks, safeguarding your web server becomes paramount. Recently, our server encountered a barrage of requests from a single IP address, causing severe strain on our resources. To mitigate such threats, we employed IPTables, the powerful firewall utility in Linux, to enforce restrictions on the number of requests from individual IPs.

IPTables Rule Implementation (For CentOS/RHEL)

In our case, the lifesaving rule we implemented using IPTables was:

-A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

This rule effectively limits the number of simultaneous connections from a single IP address to port 80. Once the threshold of 20 connections is breached, any further connection attempts from that IP are rejected with a TCP reset.

To apply this rule, follow these steps:

  1. Edit IPTables Configuration File. Open the file `/etc/sysconfig/iptables` using your preferred text editor.
  2. Add the Rule. Insert the above rule above the line that allows traffic to port 80.
  3. Save the Changes. Save the file and exit the text editor.
  4. Restart IPTables Service. Execute the following command to apply the changes:
    # /sbin/service iptables restart
    

Upon completion, the IPTables service will be restarted, enforcing the new rule and restoring stability to your server.
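To double-check that the rule is in place and see how often it has matched, you can list the INPUT chain with counters (run as root; grepping for `connlimit` is just one convenient filter):

```shell
# List INPUT rules with packet counters and line numbers,
# then pick out the connection-limit rule added above.
iptables -L INPUT -n -v --line-numbers | grep -i "connlimit"
```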

Additional Example for Ubuntu Linux Distro

For Ubuntu Linux users, the process is slightly different. Below is an example of implementing a similar IPTables rule to limit requests from a single IP address on port 80:

sudo iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

This command achieves the same objective as the previous rule. The iptables syntax is identical across distributions; the difference is that on Ubuntu the rule is typically added at runtime with `sudo iptables` rather than by editing `/etc/sysconfig/iptables`.
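One caveat: a rule added with `sudo iptables` lives only in the running kernel and disappears on reboot. On Ubuntu, the usual way to persist it is the `iptables-persistent` package (the package, service, and file names below are the Ubuntu defaults):

```shell
# Persist the current runtime rules across reboots (Ubuntu).
sudo apt install iptables-persistent
sudo netfilter-persistent save    # writes /etc/iptables/rules.v4
```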

Conclusion

In the ever-evolving landscape of cybersecurity, proactive measures like IP-based request limiting are crucial for safeguarding your web infrastructure. By leveraging the capabilities of IPTables, you can fortify your defenses against malicious attacks and ensure the uninterrupted operation of your services.

How to Identify IP Addresses Sending Many Requests in Ubuntu Linux

In today’s interconnected world, network security is paramount. One aspect of network security involves identifying and monitoring IP addresses that may be sending an unusually high volume of requests to your system. In Ubuntu Linux, several tools can help you accomplish this task effectively.

Using netstat

One of the simplest ways to identify IP addresses sending many requests is by using the `netstat` command. Open a terminal and enter the following command:

sudo netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr

This command lists the remote IP addresses behind the current TCP and UDP connections to your system, sorted in descending order by connection count.
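On current Ubuntu releases `netstat` is deprecated in favour of `ss` from the iproute2 package. The same pipeline idea carries over: in `ss -ntu` output the peer address sits in field 6, and, like the original, the `cut` step assumes IPv4 addresses:

```shell
# Count connections per remote IP using ss instead of netstat:
# skip the header line, print the peer address:port column,
# strip the port, then count and sort occurrences per IP.
ss -ntu | awk 'NR > 1 {print $6}' | cut -d: -f1 | sort | uniq -c | sort -nr
```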

Utilizing tcpdump

Another powerful tool for network analysis is `tcpdump`. In the terminal, execute the following command:

sudo tcpdump -nn -c 1000 | awk '{print $3}' | cut -d. -f1-4 | sort | uniq -c | sort -nr

This command captures the next 1,000 packets and displays the IP addresses seen in them, sorted by packet count.

Monitoring with iftop

If you prefer a real-time view of network traffic, `iftop` is an excellent option. If you haven’t installed it yet, you can do so with the following command:

sudo apt install iftop

Once installed, simply run `iftop` in the terminal:

sudo iftop

`iftop` will display a live list of IP addresses sending and receiving the most traffic on your system.

By utilizing these tools, you can effectively identify IP addresses that may be engaging in suspicious or excessive network activity on your Ubuntu Linux system. Monitoring and promptly addressing such activity can help enhance the security and performance of your network environment.

Stay vigilant and keep your systems secure!

Linux OpenSSL generate self-signed SSL certificate and Apache web server configuration

In a previous post, we covered the creation of a CSR and key for obtaining an SSL certificate. Today, we’ll focus on generating a self-signed SSL certificate, a useful step in development and testing environments. Follow along to secure your website with HTTPS.

Generating the SSL Certificate

To create a self-signed SSL certificate, execute the following command:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout www.shkodenko.com.key -out www.shkodenko.com.crt

This command generates a 2048-bit RSA key and a self-signed certificate valid for 365 days; the `-nodes` option leaves the private key unencrypted, so Apache can read it without prompting for a passphrase.
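To confirm what was generated, `openssl x509` can print the certificate's subject, issuer, and validity window (run this in the directory where the files were created):

```shell
# Inspect the self-signed certificate: subject, issuer, and
# notBefore/notAfter dates.
openssl x509 -in www.shkodenko.com.crt -noout -subject -issuer -dates
```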

Configuring Apache

Next, let’s configure Apache to use the SSL certificate. Add the following configuration to your virtual host file:

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName shkodenko.com
        ServerAlias www.shkodenko.com
        DocumentRoot /home/shkodenko/public_html
        ServerAdmin webmaster@shkodenko.com

        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/www.shkodenko.com.crt
        SSLCertificateKeyFile /etc/ssl/private/www.shkodenko.com.key

        CustomLog /var/log/apache2/shkodenko.com-ssl_log combined

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
            SSLOptions +StdEnvVars
        </FilesMatch>

        <Directory /home/shkodenko/public_html>
            Options Indexes FollowSymLinks
            AllowOverride All
            Require all granted
        </Directory>
    </VirtualHost>
</IfModule>

This configuration sets up SSL for your domain. Note that the certificate and key generated earlier must be copied to the paths referenced here (`/etc/ssl/certs/` and `/etc/ssl/private/`).
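On Debian/Ubuntu-style Apache installations, the SSL module and the new site may also need to be enabled before the virtual host is served. A sketch, where the site name `shkodenko.com-ssl` is an assumption; use whatever your vhost file is actually called:

```shell
# Enable mod_ssl and the HTTPS virtual host (Debian/Ubuntu layout).
sudo a2enmod ssl
sudo a2ensite shkodenko.com-ssl   # hypothetical vhost file name
```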

Checking Syntax and Restarting Apache

Before restarting Apache, it’s crucial to check the configuration syntax:

apachectl -t

If the syntax is correct, restart Apache to apply the changes:

systemctl restart apache2

or

service apache2 restart

Ensure your website now loads with HTTPS. You’ve successfully generated a self-signed SSL certificate and configured Apache to use it!

Linux chkconfig and service: managing autostart and service state

In Red Hat-like Linux systems such as Red Hat Enterprise Linux, CentOS, Fedora (up to version 15), and similar distributions, service management often involves the use of the /sbin/chkconfig command.

To view the status of the NFS (Network File System) service, you can use the following command:

/sbin/chkconfig --list nfs

This command displays a list indicating whether the NFS service is enabled or disabled for each runlevel (0 through 6).

To enable the NFS service, execute:

/sbin/chkconfig nfs on

To verify the status of the NFS service, rerun the previous command:

/sbin/chkconfig --list nfs

Now, you can see that the NFS service is enabled for the appropriate runlevels.

To disable the NFS service from starting automatically, use:

/sbin/chkconfig nfs off

Check the status once more to confirm the changes:

/sbin/chkconfig --list nfs

To view the autoload status of all services on the system, use:

/sbin/chkconfig --list | more

For a comprehensive list of available command options, you can refer to the help documentation:

/sbin/chkconfig --help

Additionally, you can manage the NFS service directly using the `/sbin/service` command with various options:

/sbin/service nfs [start|stop|status|restart|reload|force-reload|condrestart|try-restart|condstop]

Some commonly used options include:

  • start: Start the service.
  • status: Check the current state of the service.
  • restart: Restart the service.
  • reload: Apply new configurations without restarting.
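On newer Red Hat-family releases (RHEL/CentOS 7+, Fedora 16+), `chkconfig` and `service` are redirected to systemd. The rough `systemctl` equivalents of the commands above look like this, assuming the NFS unit is named `nfs-server` as it typically is on those systems:

```shell
# systemd equivalents of the chkconfig/service commands above.
systemctl is-enabled nfs-server   # like: chkconfig --list nfs
systemctl enable nfs-server       # like: chkconfig nfs on
systemctl disable nfs-server      # like: chkconfig nfs off
systemctl status nfs-server       # like: service nfs status
systemctl restart nfs-server      # like: service nfs restart
```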

Trimming the Last Character from a String in Bash

In the world of shell scripting, manipulating string variables is a common task. One interesting challenge you might encounter is removing the last character from a string. This task might seem simple at first glance, but it showcases the flexibility and power of Bash scripting.

Let’s dive into a practical example to illustrate how this can be achieved efficiently.

Scenario

Imagine you have a string stored in a variable, and you need to remove the last character of this string for your script’s logic to work correctly. For instance, you might be processing a list of filenames, paths, or user inputs where the trailing character needs to be omitted.

Solution

Bash provides several ways to manipulate strings. One of the simplest and most elegant methods to remove the last character from a string is using parameter expansion. Here’s a quick script to demonstrate this approach:

#!/bin/bash

# Original string
str1="foo bar"
echo "String1: ${str1}"

# Removing the last character
str2="${str1%?}"
echo "String2: ${str2}"

In this script:

  • We define a string variable `str1` with the value `"foo bar"`.
  • We then use `${str1%?}` to create a new variable `str2` that contains all characters of `str1` except for the last one. The `%?` syntax is a form of parameter expansion that removes a matching suffix pattern. In this case, `?` matches a single character at the end of the string.

How It Works

The `${variable%pattern}` syntax in Bash is a form of parameter expansion that removes the shortest match of `pattern` from the end of `variable`. The `?` in our pattern is a wildcard that matches any single character. Thus, `${str1%?}` effectively removes the last character from `str1`.

Alternative Approaches

Although the method shown above is succinct and effective for our purpose, Bash offers other string manipulation capabilities that could be used for similar tasks. For example:

  • Substring Extraction: `echo "${str1:0:${#str1}-1}"`
  • sed: If you prefer using external tools, `sed` can also achieve this: `echo "$str1" | sed 's/.$//'`

Each method has its use cases, depending on the complexity of the operation you’re performing and your personal preference.
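All three approaches can be checked side by side in a scratch script:

```shell
#!/bin/bash
str1="foo bar"

# 1. Parameter expansion: strip the shortest "?" match from the end.
a="${str1%?}"

# 2. Substring extraction: characters 0 through length-1.
b="${str1:0:${#str1}-1}"

# 3. External tool: sed deletes the final character of the line.
c=$(echo "$str1" | sed 's/.$//')

echo "$a | $b | $c"   # prints: foo ba | foo ba | foo ba
```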

Conclusion

Removing the last character from a string in Bash is straightforward with parameter expansion. This technique is just one example of the powerful string manipulation capabilities available in Bash. Experimenting with these features can help you write more efficient and effective scripts.

Mastering Program Search in Linux: A Guide to Using whereis, find, and locate Commands

To locate a specific program in your system, the `whereis` command is often the most efficient choice. For instance, if you’re searching for the ‘ls’ program, simply enter:

whereis ls

This command will display results such as:

ls: /bin/ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz

Alternatively, you have other commands at your disposal, although they might be slower. One such option is the `find` command, which can be used as follows:

find / -type f -name "ls"

Another useful command is `locate`, which searches for any file names containing ‘ls’ in their path. The syntax is straightforward:

locate ls

However, `locate` tends to return a long list of results, since it matches ‘ls’ anywhere in a path. For easier reading, pipe the output through `grep` to filter it, or through a pager such as `more` or `less`.
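One more tool worth mentioning: the shell builtin `command -v`, which prints the path of the executable your shell would actually run, which is useful when `whereis` lists several candidates:

```shell
# Show which executable the current shell resolves "ls" to.
command -v ls
```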

Enhancing Laravel Controllers to Output Custom JSON Structures

Introduction

In the world of web development, especially when working with APIs, customizing the output of your controllers can significantly improve the readability and usability of your data. In this post, we’ll explore how to modify a Laravel controller to output a specific JSON format. This technique is particularly useful when dealing with front-end applications that require data in a certain structure.

The Challenge

Imagine you have an array in your controller that needs to be outputted as a JSON array of objects, but your current setup only returns a simple associative array. Let’s take the following requirement as an example:

Newly required JSON Format:

[
{ "id": 1, "name": "Low" },
{ "id": 2, "name": "Average" },
{ "id": 3, "name": "High" }
]

Existing Controller Code:

<?php

namespace App\Http\Controllers\API\Task;


use App\Http\Controllers\API\BaseController;
use Illuminate\Http\Response;


class TaskPriorityController extends BaseController
{
    public static array $taskPriority = [
        1 => 'Low',
        2 => 'Average',
        3 => 'High',
    ];

    public function index()
    {
        return response()->json(self::$taskPriority);
    }
}

The Solution

To achieve the desired JSON output, we need to transform the associative array into an indexed array of objects. Here’s how we can do it:

Updated Controller Code:

<?php

namespace App\Http\Controllers\API\Task;


use App\Http\Controllers\API\BaseController;
use Illuminate\Http\Response;


class TaskPriorityController extends BaseController
{
    public static array $taskPriority = [
        1 => 'Low',
        2 => 'Average',
        3 => 'High',
    ];

    public function index()
    {
        $formattedTaskPriorities = array_map(function ($key, $value) {
            return ['id' => $key, 'name' => $value];
        }, array_keys(self::$taskPriority), self::$taskPriority);

        return response()->json(array_values($formattedTaskPriorities));
    }
}

In this solution, we used PHP’s `array_map` function. This function applies a callback to each element of the array, allowing us to transform each key-value pair into an object-like array. We then use `array_keys` to pass the keys of the original array (which are our desired IDs) to the callback function. Finally, `array_values` ensures that the JSON output is an indexed array, as required.

Conclusion

Customizing the JSON response of a Laravel controller is a common requirement in modern web development. By understanding and leveraging PHP’s array functions, you can easily format your data to meet the needs of your application’s front end. This small change can have a significant impact on the maintainability and readability of your code, as well as the performance of your application.

Additional Tips

  • Always test your endpoints after making changes to ensure the output is correctly formatted.
  • Consider the scalability of your solution; for larger data sets, you might need to implement more efficient data handling techniques.

How to Implement Automatic Logout in Linux Bash Session After 5 Minutes of Inactivity

If you’re looking to enhance the security of your Linux system, setting up an automatic logout for the bash session after a period of inactivity is a great step. Here’s how to implement a 5-minute timeout policy:

  1. Set the Timeout Policy: Open the `~/.bash_profile` or `/etc/profile` file in your preferred text editor. Add the following lines to set a 5-minute (300 seconds) timeout:
    # Set a 5 min timeout policy for bash shell
    TMOUT=300
    readonly TMOUT
    export TMOUT
    

    This code sets the `TMOUT` variable to 300 seconds. The `readonly` command ensures that the timeout duration cannot be modified later in the session, and `export` makes it visible to child shell processes.

  2. Disabling the Timeout: If you need to disable the automatic logout feature, you can do so by running one of the following commands:
    1. To temporarily disable the timeout for your current session:
      # Disable timeout for the current session
      export TMOUT=0
      
    2. Or, to remove the `TMOUT` setting completely from your session:
      # Unset the TMOUT variable
      unset TMOUT
      
  3. Important Considerations: A `readonly` variable cannot be unset or modified within the session that set it. To change the policy, edit the file where it is defined, either the global bash configuration file (`/etc/profile`, which only root can edit) or a user’s own `~/.bash_profile`, and then start a new session.
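The `readonly` behavior above is easy to verify in a throwaway subshell: once the attribute is set, a later assignment to `TMOUT` fails and the shell exits non-zero.

```shell
# A readonly TMOUT cannot be lowered or unset in the same session:
# the inner assignment fails, so the subshell exits non-zero.
bash -c 'TMOUT=300; readonly TMOUT; TMOUT=0' 2>/dev/null \
  || echo "TMOUT is readonly: assignment was rejected"
```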

By following these steps, you can effectively manage the automatic logout feature for your Linux bash sessions, enhancing the security and efficiency of your system.