October 17, 2024

154news

Latest Hot News

January 30, 2024 | SparkyLinux

“Introducing Sparky Bonsai: The Latest SparkyLinux Portable Edition with Joe’s Window Manager – Get the Scoop from Softpedia News”

Introducing Sparky Bonsai: The Revolutionary Portable Debian-Based Operating System
Sparky Bonsai is shaking up the world of GNU/Linux operating systems with its new revolutionary community edition. This innovative system can be easily run and used directly from a USB stick, without the need for any installation on your personal computer. Say goodbye to traditional bulky OS installations and experience the convenience of Sparky Bonsai’s portable and lightweight approach.
Experience Ultimate Versatility
While many Linux distros offer live versions for testing, Sparky Bonsai takes it to the next level with its USB flash drive compatibility. This means you can use it on any computer without needing to install the actual OS onto the hard drive. Enjoy the freedom of being able to take your operating system with you wherever you go, with the added bonus of persistence.
Minimalism at Its Finest
Say goodbye to bloated and complicated user interfaces: Sparky Bonsai keeps things simple with the lightweight JWM (Joe’s Window Manager) stacking window manager. Its minimalist approach extends to its choice of applications as well, with only the essential tools needed for everyday use. But don’t be fooled by its simplicity, as Sparky Bonsai still packs a punch with the latest Debian GNU/Linux 10 “Buster” base.
Powerful Features Under the Hood
Despite its minimalistic appearance, Sparky Bonsai is packed with powerful features. It runs on the stable Linux 4.19.0-6 kernel and uses Debian Buster’s modules for Porteus boot and AUFS support. With the speedy Pale Moon web browser, efficient Mousepad text editor, and versatile LXTerminal at your fingertips, you’ll have all the tools you need to conquer any task. Plus, with Synaptic as the default package manager, installing new software is a breeze.
Join the Sparky Bonsai Revolution
Don’t just take our word for it, join the Sparky Bonsai revolution and experience the future of portable operating systems. Available for download now, and with support from the SparkyLinux community, there’s never been a better time to make the switch. Say hello to simplicity, flexibility, and unmatched convenience with Sparky Bonsai.

Source: https://news.google.com/rss/articles/CBMie2h0dHBzOi8vbmV3cy5zb2Z0cGVkaWEuY29tL25ld3MvbWVldC1zcGFya3ktYm9uc2FpLXNwYXJreWxpbnV4LXBvcnRhYmxlLWVkaXRpb24tZmVhdHVyaW5nLWpvZS1zLXdpbmRvdy1tYW5hZ2VyLTUyODU0Mi5zaHRtbNIBAA?oc=5

January 30, 2024 | SparkyLinux

“Experience the Power of SparkyLinux 6.6 – Download Now! – Nagorik TV”

Debian-Based Linux Operating System, SparkyLinux, Gets a Major Upgrade with Sparky 6.6 “Po Tolo” Release
Discover the Powerful Features of the Latest Update and How to Get It Now

SparkyLinux, the highly acclaimed Debian-based Linux distribution, has just released its newest version – Sparky 6.6 “Po Tolo”. This update, based on Debian 11, comes packed with game-changing features and enhancements that are sure to elevate your computing experience.

But that’s not all. Sparky 6.6 also introduces a groundbreaking feature that lets you run the system directly from a USB flash drive while saving your work. Now that’s what we call convenience on the go!

Let’s take a closer look at the impressive list of features and upgrades that come with Sparky 6.6:

– All packages upgraded from the Debian and Sparky stable repositories as of February 4, 2023
– Linux Kernel 5.10.166 LTS, with the option to install Linux Kernel 6.1 from Sparky unstable repositories
– Compatibility with ARM systems, featuring Linux Kernel 5.15.84-v7+
– Firefox 102.7.0esr (Sparky’s own Mozilla build), available as the “firefox-sparky” package in the Sparky repositories
– Updated Thunderbird email client to version 102.6.0
– Enhanced productivity with LibreOffice 7.0.4
– Sleeker desktop environments, with LXQt 0.16.0, Xfce 4.16, Openbox 3.6.1, and KDE Plasma 5.20.5
– Small but significant tweaks for smoother performance

It’s worth noting that the new persistence feature is only compatible with SparkyLinux 6.6 or later versions. So if you’re still using an older version, now is the perfect time to upgrade and take full advantage of this game-changing feature.

Upgrading to the latest version is as easy as running a single command in the terminal: “sparky-upgrade”. And with multiple editions to choose from, including LXQt, KDE Plasma, Xfce, and minimal GUI and text mode versions, Sparky 6 is accessible to a wide range of users. It’s available for amd64, i686, and armhf systems, making it suitable for all your computing needs.
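
For reference, here is a minimal terminal sketch of that upgrade path; it assumes the sparky-upgrade helper from Sparky’s APTus tooling is installed, and the plain APT route on the Debian 11 base is shown as a rough equivalent:

    # Sparky's own upgrade helper (assumes the sparky-upgrade tool is present)
    sudo sparky-upgrade

    # Roughly equivalent plain APT route on the Debian 11 base
    sudo apt update && sudo apt full-upgrade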

Ready to experience the power of Sparky 6.6 for yourself? Simply click on the link provided to download and discover all the amazing features. And don’t forget to subscribe to our newsletter for the latest Linux news and updates from the top tech industries. Don’t miss out on the hottest trends in the world of Linux – subscribe now!

Source: https://news.google.com/rss/articles/CBMiSGh0dHBzOi8vd3d3LmNsb3VkaG9zdG5ld3MuY29tL3NwYXJreWxpbnV4LTYtNi1ub3ctYXZhaWxhYmxlLXRvLWRvd25sb2FkL9IBAA?oc=5

January 30, 2024 | MX Linux

Introducing MX Linux 21.3 “Wildflower”: Enhanced Hardware Compatibility, Powered by Debian 11.6 and Xfce 4.18 – Neowin Report

Welcome to the newest edition of MX Linux, the “Wildflower” release! Boasting an unparalleled combination of hardware compatibility and the latest software updates, MX Linux 21.3 is the perfect choice for those seeking a reliable and high-performing operating system. With the powerful Debian 11.6 base and the sleek Xfce 4.18 desktop environment, users can expect a seamless and efficient experience. Don’t just take our word for it, see for yourself why MX Linux is making waves in the tech world. Check out the full report on Neowin.

Introducing MX Linux 21.3 “Wildflower”: The Latest and Greatest Linux Distro Based on Debian 11.6!

MX Linux, known for its impressive features and user-friendly experience, has just released its latest version, MX Linux 21.3 “Wildflower”. This release brings significant updates and improvements, solidifying its position as a top choice for Linux users.

Based on Debian 11.6, MX Linux 21.3 offers a stable, reliable foundation for all your computing needs. It also comes with Xfce 4.18, which was released in December. And the best part? MX-21 users can easily update to this latest version without having to reinstall the entire system.

But what makes MX Linux stand out? Well, for starters, it consistently ranks first on Distrowatch’s page hit rankings. And it’s no surprise why – MX Linux offers Xfce, KDE, and Fluxbox editions, catering to a wide range of preferences. Plus, it’s designed to provide an excellent feature set without slowing down your computer’s performance.

This update introduces a range of new features and improvements, including the advanced hardware support (AHS) enabled release for KDE users, a new mx-rofi-manager tool for Fluxbox, and the updated antiX live/remaster system. And to make things even more convenient, all releases now come with the menulibre menu editor.

If you’re looking to try out the latest and greatest from MX Linux, head over to the project’s downloads page. And for those who want the core MX Linux experience, the MX-21.3_x64 Xfce edition is the perfect choice. But don’t worry, there are also 32-bit versions available for older systems.

Experience the power, stability, and user-friendliness of MX Linux 21.3 “Wildflower” now. Don’t just take our word for it – give it a try and see for yourself why it’s the top choice for Linux enthusiasts around the world!

Source: https://news.google.com/rss/articles/CBMicGh0dHBzOi8vd3d3Lm5lb3dpbi5uZXQvbmV3cy9teC1saW51eC0yMTMtd2lsZGZsb3dlci1yZWxlYXNlZC1kZWJpYW4tMTE2LWJhc2UteGZjZS00MTgtaW1wcm92ZWQtaGFyZHdhcmUtc3VwcG9ydC_SAW9odHRwczovL3d3dy5uZW93aW4ubmV0L2FtcC9teC1saW51eC0yMTMtd2lsZGZsb3dlci1yZWxlYXNlZC1kZWJpYW4tMTE2LWJhc2UteGZjZS00MTgtaW1wcm92ZWQtaGFyZHdhcmUtc3VwcG9ydC8?oc=5

January 30, 2024 | MX Linux

“Experience the Power of MX Linux 19 Beta 1 – Download the Debian-Based Operating System Today at BetaNews.”

Introducing the Wildly Popular and Revolutionary MX Linux 19 Beta 1 – a Game Changer for Open Source Operating Systems

Witness the launch of MX Linux 19 Beta 1, the highly anticipated operating system that is taking the Linux community by storm. Buckle up as the BetaNews team walks you through its cutting-edge features and enhancements that are setting the bar high for Linux distributions. Join the ranks of avid supporters and dive into the world of MX Linux, the ultimate choice for a seamless and efficient user experience. Don’t miss out on the chance to be a part of this groundbreaking release. Download MX Linux 19 Beta 1 now and revolutionize your computing experience.

Source: https://news.google.com/rss/articles/CBMiOGh0dHBzOi8vYmV0YW5ld3MuY29tLzIwMTkvMDgvMjYvbXgtbGludXgtMTktYmV0YS1kZWJpYW4v0gEA?oc=5

January 30, 2024 | 154news

Top 11 Linux certifications – TechTarget

Linux certifications test your ability to deploy and configure a Linux system in a business context. These certifications range from vendor-specific to distribution-agnostic. Several certification vendors provide specialization paths that enable candidates to pursue specific skill sets that match their job roles.

IT professionals use certifications to add to their resumes to prove their knowledge and supplement their experience. Certifications and training also provide an entryway for those just beginning their IT career. Sys admins experienced with other OSes might also wish to broaden their knowledge by adding Linux to their expertise.

This article focuses on the top Linux certifications that can benefit IT personnel.

1. CompTIA Linux+

CompTIA’s most current Linux+ certification is a vendor-agnostic approach to learning Linux. It covers working with the command line, managing storage, using applications, installation and networking. Linux+ supplements these skills with containers, SELinux security and GitOps. This certification is valid for three years.

  • Prerequisites: None.
  • Number of exams: One.
  • Cost of exam: $358.
  • Minimum passing score: 720/900 points or 80%.
  • Study materials: Self-paced, vendor, third-party.

2. Red Hat Certified System Administrator (RHCSA)

The RHCSA is usually the first Red Hat certification goal for Red Hat Enterprise Linux administrators. It covers essential maintenance, installation, configuration and networking. This certification provides a hands-on command-line experience.

Red Hat certification exams are entirely performance-based. The exams provide one or more VMs to accomplish a list of tasks. Configure tasks correctly to pass the exam.

  • Prerequisites: None.
  • Number of exams: One.
  • Cost of exam: $400-500.
  • Minimum passing score: 70%.
  • Study materials: Self-paced, vendor, third-party.

3. Red Hat Certified Engineer (RHCE)

The RHCE builds on the RHCSA objectives by covering topics like users and groups, storage management and security. The most crucial subject for RHCE candidates is automation, which has a heavy emphasis on Ansible.

This certification exam is task-driven. It uses a set of requirements and VMs to validate your abilities.

  • Prerequisites: RHCSA.
  • Number of exams: One.
  • Cost of exam: $400-500.
  • Minimum passing score: 70%.
  • Study materials: Self-paced, vendor, third-party.

4. Red Hat Certified Architect (RHCA)

RHCA candidates must pass a combination of five Red Hat exams. Red Hat provides an extensive list of valid certifications. This offers flexibility for administrators to match their knowledge to their job skills. There are two areas of emphasis: infrastructure and enterprise applications.

The RHCA certification is Red Hat’s highest recognized credential.

  • Prerequisites: Five supporting certification exams.
  • Number of exams: Zero.
  • Cost of exam: Sum of the five chosen prerequisite exams.
  • Minimum passing score: Varies per chosen prerequisite exams.
  • Study materials: Self-paced, vendor, third-party.

5. Linux Foundation Certified System Administrator (LFCS)

The Linux Foundation offers a wide variety of distribution-neutral certifications that cater to Linux generalists and those needing more specialized skills. The Linux Foundation has retired the Linux Foundation Certified Engineer certification in favor of topics that better align with job roles.

The LFCS is the Foundation’s primary certification, acting as a springboard for more topic-specific exams. It covers basic deployment, networking, storage, essential commands and user management. The Linux Foundation offers other specialized certifications, like container management with Kubernetes and cloud administration.

  • Prerequisites: None.
  • Number of exams: One.
  • Cost of exam: $595.
  • Minimum passing score: 67%.
  • Study materials: Self-paced, vendor, third-party.

6. Linux Professional Institute LPIC-1

The Linux Professional Institute (LPI) offers distribution-neutral certifications that emphasize day-to-day administrative tasks. LPI offers a wide selection of certifications, but their general sys admin exams remain the most popular.

The LPIC-1 tests your skills in system maintenance, architecture, file security, system security and networking. This certification is a stepping-stone to more advanced LPI exams. It is valid for five years.

  • Prerequisites: None.
  • Number of exams: Two.
  • Cost of exam: $200 per exam.
  • Minimum passing score: 500/800 points or 62.5%.
  • Study materials: Self-paced, vendor, third-party.

7. Linux Professional Institute LPIC-2

The LPIC-2 builds on the LPIC-1 skills by adding advanced network, system configuration and deployment topics. Unlike other certifications, it includes information on data center management and automation. This certification requires you to hold the LPIC-1 certification. LPI recognizes this certification for five years.

  • Prerequisites: LPIC-1.
  • Number of exams: Two.
  • Cost of exam: $200 per exam.
  • Minimum passing score: 500/800 points or 62.5%.
  • Study materials: Self-paced, vendor, third-party.

8. Linux Professional Institute LPIC-3

LPI offers four specializations at the LPIC-3 certification level. This level is designed for enterprise-class Linux administration tailored to specific job roles. Passing any one specific exam awards you the related LPIC-3 certification. The specializations include the following:

  • Mixed Environments.
  • Security.
  • Virtualization and Containerization.
  • High Availability and Storage Clusters.

Unlike LPIC-1 and LPIC-2, there is only one exam for each LPIC-3 specialization. However, you must hold the LPIC-1 and LPIC-2 certifications.

  • Prerequisites: LPIC-1 and LPIC-2.
  • Number of exams: One.
  • Cost of exam: $200.
  • Minimum passing score: 500/800 points or 62.5%.
  • Study materials: Self-paced, vendor, third-party.

9. Oracle Linux 8 System Administrator

Oracle’s Linux distribution is derived from Red Hat Enterprise Linux with additional utilities and applications. The certification validates an administrator’s system deployment, maintenance and monitoring skills. It is the foundation for more advanced Oracle Linux certifications that cover topics from cloud management to middleware.

  • Prerequisites: None.
  • Number of exams: One.
  • Cost of exam: $245.
  • Minimum passing score: 60%.
  • Study materials: Self-paced, vendor, third-party.

10. SUSE Certified Administrator (SCA)

Those who work with the SUSE Linux Enterprise Server (SLES) 15 distribution begin their certification journey with the SCA exam. The objectives cover basic topics SLES administrators should know, including file system management, command line tasks, using Vim, software, networking, storage and monitoring. This certification has no prerequisites and is designed for beginning SUSE administrators.

  • Prerequisites: None.
  • Number of exams: One.
  • Cost of exam: $149.
  • Minimum passing score: 70%.
  • Study materials: Self-paced, vendor, third-party.

11. SUSE Certified Engineer (SCE)

The SCE builds on the SCA with advanced administration capabilities, including scripting, encryption, storage, networking and configuration management. The certification is built around SUSE’s own Linux Enterprise Server 15.

  • Prerequisites: SCA.
  • Number of exams: One.
  • Cost of exam: $195.
  • Minimum passing score: 70%.
  • Study materials: Self-paced, vendor, third-party.

Choose the right certification

To help choose which certifications are best for you, consider what Linux distribution your current employer uses and pursue a related exam path. These exams might include Red Hat, SUSE or Oracle certifications. If your organization uses multiple distributions, consider a vendor-agnostic option, such as CompTIA, LPI or the Linux Foundation.

It can also be worthwhile to mix a couple of distribution-neutral certifications with vendor-specific ones. For example, adding CompTIA’s Linux+ certification to your RHCSA knowledge will help you better understand the benefits other distributions might provide in your Red Hat environment.

Choose certifications that fit your current or future job role. Strongly consider pursuing the advanced certifications offered by Red Hat, LPI and others that focus on specific areas of the industry, such as cloud, containerization or configuration management.

Source: https://news.google.com/rss/articles/CBMiVmh0dHBzOi8vd3d3LnRlY2h0YXJnZXQuY29tL3NlYXJjaGRhdGFjZW50ZXIvdGlwL1RvcC01LW9wdGlvbnMtZm9yLUxpbnV4LWNlcnRpZmljYXRpb25z0gEA?oc=5

January 30, 2024 | 154news

Linux Patch Management: Tools, Issues & Best Practices – eSecurity Planet

Compared to other operating systems, Linux patch management is unique because of its open-source nature, which enables a sizable community of developers and security professionals to find vulnerabilities, examine the code, and submit patches.

Linux distributions use package managers to make it easier for users to install software packages and updates. These package managers automate the download, installation, and dependency resolution process, which simplifies patch application. While popular Linux distributions can be as easy as Windows to update, many enterprises and organizations prefer to test patches and manage their distribution, creating many of the same issues that admins face with closed-source operating systems.

Here we’ll discuss how patch management works on Linux, best practices, and the best patch management tools for Linux.

How Patching on Linux Is Different From Other Systems

When comparing patch management between Linux and proprietary operating systems like Windows and macOS, there are several key differences to consider.

The open-source nature of Linux means its source code is freely available and fosters a large community of security researchers and developers who actively contribute to vulnerability discovery, patch development and distribution. Package managers make it easier to update software packages and apply updates and patches, and different Linux distributions have their own patching processes and practices. The Linux community actively participates in identifying bugs, developing patches, and testing them, resulting in a collaborative and comprehensive patching process. Organizations using Linux in critical applications will typically have experienced administrators who can apply patches and updates, either manually, via package manager, or automatically.

With closed-source operating systems like Windows, security researchers and others might find and submit bugs to Microsoft, but the source code is not freely accessible and patch development is primarily managed by Microsoft, with Windows Update the mechanism that delivers patches and updates to users. Microsoft releases patches on Patch Tuesday, a scheduled monthly update release. Microsoft performs extensive testing on patches before releasing them to the public.

macOS and iOS are also closed ecosystems, with Apple maintaining complete control over the software and patching process. Apple releases patches and updates for macOS and iOS devices periodically, usually alongside new feature releases. The company typically places a high priority on security and swiftly addresses vulnerabilities through patches.

Linux’s open-source nature, package managers, community involvement, and flexibility create its own patch management cadence and challenges, with admins handling patches and updates, sometimes via command line. Microsoft and Apple can automatically update devices for users, although admins sometimes prefer to test and manage updates to ensure that everything works smoothly. Windows focuses on regular cumulative updates and extensive testing, while macOS and iOS emphasize a closed ecosystem and controlled release schedule with a strong emphasis on security.

Each operating system has its own approach to patch management based on its underlying philosophy and target audience. There are patch management tools that can help manage all of those operating systems and more; we’ll get to those later on.

Common Linux Patch Management Issues

Patching all the devices and software in an organization can be challenging regardless of the underlying operating system. While some of these issues also apply to closed-source operating systems, Linux can pose unique patch management challenges because of its open source nature, wide range of distros, and its use in critical applications.

1. Lack of centralized repository for updates

While Linux distros typically publish patches through their own online repositories and devices can simply pull them, the absence of a centralized internal repository for updates can make it challenging for an organization to efficiently distribute and manage patches across multiple systems and distros. To fix this, organizations can set up a local package repository using tools like apt-mirror or Spacewalk. This can be used to store and distribute updates to all relevant systems and provide a centralized location for patch management.
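
As a rough sketch of that approach on a Debian-family system, an apt-mirror setup might look like the following (the suites, paths, and internal hostname are illustrative, not prescriptive):

    # /etc/apt/mirror.list -- minimal apt-mirror example
    set base_path /var/spool/apt-mirror
    deb http://deb.debian.org/debian bookworm main
    deb http://security.debian.org/debian-security bookworm-security main
    clean http://deb.debian.org/debian

    # Pull the mirror (typically from cron), then serve base_path over HTTP so that
    # clients' sources.list entries point at the internal mirror instead of the public one
    sudo apt-mirror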

2. Time-consuming patching process 

Patching on Linux systems can be time consuming, especially when dealing with a large quantity of endpoints. Manually updating each system can be tedious, complex and prone to human error. There are automation tools that can be employed to significantly reduce the time required for patching. These tools allow for the central management and deployment of patches across multiple systems simultaneously, streamlining the process and saving time.

3. Downtime

Patching Linux systems usually requires restarting or rebooting the entire system, and this can result in downtime. As Linux is widely used in enterprise and cloud environments, this can cause a problem for critical systems that need to remain operational. To minimize downtime, organizations can implement techniques like live patching, which allows patches to be applied without system restarts or reboots. In addition, high-availability setups and load balancing ensure that services remain accessible during the patching process.
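
As a hedged example, two common live-patching routes are Red Hat’s kpatch tooling and Canonical Livepatch for Ubuntu; the commands below assume the relevant packages or snap are available, along with the appropriate subscription or token:

    # RHEL family: live patches delivered as kernel modules via kpatch
    sudo dnf install kpatch kpatch-dnf
    sudo dnf kpatch auto               # opt in to automatic live-patch installation
    sudo kpatch list                   # show loaded and installed live patches

    # Ubuntu: Canonical Livepatch (token obtained from Canonical's Livepatch service)
    sudo snap install canonical-livepatch
    sudo canonical-livepatch enable <TOKEN>
    sudo canonical-livepatch status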

4. Missing endpoint visibility

Having limited visibility into the status of endpoints can make it difficult to assess the patching status of each system. Implementing centralized monitoring solutions can help provide real-time visibility into patching status of all endpoints. These tools can generate alerts and reports that enable administrators to identify and address any systems that are not properly patched.

5. Difference of systems and applications

Linux environments often consist of various distributions, versions, and configurations that lead to differences in systems and applications. This diversity can make it challenging to apply patches uniformly across endpoints. One potential solution is to standardize the Linux distributions and configurations within the organization. By minimizing the number of variations, it becomes easier to test and apply patches consistently. Additionally, leveraging configuration management tools can help maintain a standardized environment and simplify the patching process.

6. Scattered endpoints due to remote or hybrid work

Remote and hybrid work setups will result in scattered endpoints across different locations, adding to patching difficulty. Employing a remote management solution can help manage and patch systems regardless of the physical location of the device.

7. Automating patches can be challenging

Resource-intensive automated patching procedures can occasionally cause performance problems on the target systems. Administrators can reduce the effect on system resources by scheduling patching tasks at off-peak times. Additionally, streamlining the patching process and adopting resource-friendly automation technologies will help conserve resources and provide a more seamless patch rollout.

Also read: 11 Key Steps of the Patch Management Process

How Major Linux Distributions Handle Patching

Debian

Debian, the mother of various Linux distributions like Deepin, Ubuntu and Mint, employs a unique approach to patch management that sets it apart from the others. Debian’s patch management revolves around its robust package management system, known as the Advanced Package Tool (APT). This system handles the retrieval, installation, and maintenance of software packages, including patches and updates. Debian’s development process focuses on thorough testing and quality assurance, aiming to deliver reliable and stable software to its users. The Debian community places a strong emphasis on transparency and collaboration, and Debian maintains a dedicated security team responsible for promptly addressing vulnerabilities and providing security patches. Debian’s long-standing history and reputation as a reliable and stable distribution have contributed to its meticulous approach to patch management.
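
In day-to-day practice, routine Debian patching boils down to a couple of APT commands; a minimal sketch:

    # Refresh package metadata, then apply pending updates from the configured repositories
    sudo apt update
    sudo apt upgrade          # or: sudo apt full-upgrade, to allow dependency additions/removals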

Ubuntu

Ubuntu is a popular Linux distribution with built-in patch management mechanisms. It uses the Advanced Package Tool (APT) as its package management system, which provides regular updates and security patches through its official repositories. The Ubuntu Update Manager provides a graphical interface for managing updates and patches. Additionally, the Ubuntu Security Notices (USN) system provides detailed security advisories and alerts about vulnerabilities and corresponding patches. Ubuntu’s patch management approach focuses on delivering timely updates, security patches and bug fixes to ensure the stability and security of the system. It provides a user-friendly interface and command-line tools to simplify the process of managing and applying patches, making it accessible to both beginners and experienced users.
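
To complement the graphical Update Manager, here is a hedged command-line sketch for automating Ubuntu security updates with tools Ubuntu ships:

    # Enable automatic security updates via unattended-upgrades
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure --priority=low unattended-upgrades
    sudo unattended-upgrade --dry-run --debug   # preview what would be patched, without applying

    # Summarize the security coverage of installed packages (Ubuntu 20.04 and later)
    ubuntu-security-status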

Gentoo

The patch management mechanism used by Gentoo Linux is distinct from the conventional techniques used by other distributions. Gentoo uses a rolling-release model, in which packages are updated continuously rather than in discrete releases. It relies on Portage, a package management system that enables users to compile and customize applications from source code. The Gentoo community maintains a sizable collection of Gentoo-specific patches covering security flaws, defects, and added functionality. Through the emerge command, Gentoo also offers a powerful mechanism for handling package updates and security fixes.
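
A typical Portage update cycle looks roughly like this (glsa-check assumes the app-portage/gentoolkit package is installed):

    sudo emerge --sync              # refresh the Portage tree
    sudo emerge -avuDN @world       # update the whole system set, rebuilding changed dependencies
    glsa-check -l affected          # list Gentoo Linux Security Advisories affecting this system
    sudo glsa-check -f affected     # apply the fixes for those advisories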

Compared to centralized patch management solutions, Gentoo’s patch management takes more manual work, but it also gives a high level of flexibility and control over the patching process. Users can apply patches selectively, tailor software configurations, and fine-tune their systems according to their needs.

Linux Patch Management Best Practices

There are a number of best practices that can help admins manage the Linux update process. Many of these also apply to closed-source operating systems and applications. See Patch Management Best Practices & Steps for an in-depth guide to patch management.

Monitor and update regularly

Monitor any Linux distribution you use for security advisories, bug fixes, and patches, and update promptly. It’s essential to keep Linux distributions and applications — or any software, for that matter — updated to stay safe from known vulnerabilities. Regular monitoring makes sure that any security advisories, bug fixes, or patches are quickly recognized and deployed.
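
For example, seeing what patches are pending is a single command on most distributions (pick the line matching your distro family):

    apt list --upgradable       # Debian/Ubuntu family
    dnf check-update            # RHEL/Fedora family
    zypper list-updates         # SUSE family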

Prioritize patches

Assess the risk and impact of applying patches to ensure security and system stability. Patch deployment priorities are determined by assessing possible risks and effects of each patch. By concentrating on crucial patches that fix serious flaws or have a significant influence on system stability, system administrators may make sure that resources are used effectively and that possible disruptions are kept to a minimum.

Test patches

Test patches in a controlled environment to identify conflicts, compatibility issues, or unexpected behavior before applying them to the live system. Properly validating and verifying the system during this testing step reduces the chance of adverse effects on system functionality or performance.

Establish patching policies and procedures

A patch management policy is critical for any organization regardless of operating system or application, and policies and procedures should define roles and responsibilities too. Clear, written patching rules and processes help ensure success by making processes repeatable, while defined roles and responsibilities make it clear who is responsible for patch deployment, testing, and monitoring, promoting efficient collaboration and reducing ambiguity.

Also read: Patch Management Policy: Steps, Benefits and a Free Template

Use a patch management system

Implement a centralized patch management system to automate and streamline the patch deployment process. A patch management tool saves manual work, enables effective patch distribution across multiple systems, and gives centralized control and insight over the patch management lifecycle.

Create backups

Creating system backups before applying fixes will let you restore to a known working state in case of issues or failures during patching. Backups guarantee that important information and configurations can be recovered, preventing any interruptions or data loss that may result from patch deployments gone wrong. Many patch management tools contain rollback functions that accomplish the same purpose.
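
As a minimal sketch of that idea on an LVM-based system (the volume group vg0 and logical volume root are illustrative names, and the snapshot must be sized to hold whatever changes occur while it exists):

    sudo lvcreate --snapshot --size 5G --name prepatch /dev/vg0/root
    # ...apply patches and verify services...
    sudo lvremove /dev/vg0/prepatch              # all good: discard the snapshot
    sudo lvconvert --merge /dev/vg0/prepatch     # problems: merge the snapshot back, reverting root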

Keep records

In order to trace the history of installed patches, monitor for compliance, and provide evidence for audits or investigations, it is important to keep thorough records of patch-related actions. Patch management procedures should be regularly evaluated and audited to verify process effectiveness, identify potential improvement areas, and maintain a proactive and strong security posture.

Train your staff

Educate and train staff on the importance of patch management, best practices, and the risks of delayed patching. Raising awareness of the vital function patches play in ensuring system security makes staffers more aware of best practices, such as prompt patch distribution and the possible hazards connected to neglected or delayed patching, helping to build an organizational culture of proactive security.

Top Linux Patch Management Tools

Patch management tools have greatly increased their support for Linux over the years; most of our picks for the Top Patch Management Tools support Linux, but these three are among the most popular for Linux environments.

ManageEngine Patch Manager Plus

ManageEngine Patch Manager Plus is a comprehensive patch management solution that supports different operating systems, including Linux. Administrators can automate patch deployment, plan patching jobs, and keep track of patch compliance with the help of its centralized patch management features. A variety of capabilities are available with ManageEngine such as vulnerability detection, patch testing, and patch rollback options. To monitor patching status and compliance across the enterprise, it also offers comprehensive reporting and analytics.

Pros

  • Thorough patch testing: Prior to distribution, each patch is rigorously tested to ensure compatibility and stability
  • Cloud-based or on-premises deployment
  • Complete hardware management: Supports BIOS and hardware driver updates

Cons

  • Reporting functionality could offer more thorough insights
  • Users would like to see faster support response
  • No advance notice of client version upgrades: Prior knowledge of client version upgrades would allow better planning and preparation

Pricing

ManageEngine provides a wide range of pricing options. Professional plans start from $245/year up to $24,295/year. Enterprise plans range from $345/year to $37,425/year. The pricing structure takes into account factors such as the number of servers and computers to be managed. More detailed pricing information is available on ManageEngine’s website.

NinjaOne Patch Management

Another patch management program that supports Linux distributions is NinjaOne Patch Management. It offers a simple user interface for managing patches on various Linux platforms. NinjaOne focuses on automating patch detection, download, and deployment in order to streamline the patch management process. To make patch management activities more efficient, it provides patching policies, reporting, and monitoring functions.

Pros

  • Effective patch management: Users like the proactive handling of important security updates, ensuring a strong security posture
  • Modern and user-friendly: The platform is praised for its modern interface, ease of use, and seamless integration with other products.
  • Comprehensive IT asset monitoring: Users appreciate the software’s ability to monitor and manage their IT assets and network effectively.

Cons

  • Steep learning curve for non-technical users: Some features may be intimidating for those unfamiliar with Group Policy editing, Command Prompt, or PowerShell.
  • Missing features: Users mention the absence of cross-organization user accounts and SAML SSO, although these are planned for future updates.
  • 2FA Requirement: Configuration and policy changes require 2FA (Two-Factor Authentication), which can be seen as an inconvenience by some users, even as it’s a good security feature.

Pricing

NinjaOne Patch Manager doesn’t reveal pricing but is seen by users as reasonable and flexible. The Remote Monitoring and Management (RMM) vendor has a “Pay-per-device” monthly pricing structure. You can get a custom patch management quote directly from their pricing page.

Red Hat Enterprise Linux

The popular enterprise-grade Linux distribution Red Hat Enterprise Linux comes with its own patch management system. RHEL handles patching using the Red Hat Network (RHN) or the separately licensed Red Hat Satellite. RHN and Red Hat Satellite offer a framework for centralized administration that administrators can use to access and distribute fixes across RHEL systems. Additionally, Red Hat offers package managers like Dandified Yum (DNF) that make it simple for users to manage and deploy updates from the official Red Hat repositories. RHEL concentrates on giving its clients enterprise-level support, reliability, and security upgrades.
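
A hedged sketch of security-focused patching on a RHEL system with DNF (the needs-restarting command comes from the dnf-plugins-core/yum-utils tooling):

    sudo dnf updateinfo list security     # list security advisories applicable to this host
    sudo dnf upgrade --security           # apply only security-related updates
    sudo dnf needs-restarting -r          # report whether a reboot is required afterwards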

Pros

  • Effective system resource management: The program efficiently controls the use of system resources, limiting excessive resource consumption and reducing the chance that other programs may be affected by software crashes.
  • Simple software development: The distribution’s hassle-free installation of numerous programming languages makes it easy for programmers to create and build code in the languages of their choice.
  • Simplified device communication: It is simple and user-friendly to access and utilize drivers for a variety of devices, including serial and networked devices.

Cons

  • Improved software dependency management: The management of software dependencies could be improved by making it easier to identify package dependencies and the packages that depend on particular components.
  • Command-line configuration: Improving the command-line configuration of some packages, especially those that are traditionally set via the GUI, would be beneficial.

Pricing

The cost of Red Hat Enterprise Linux (RHEL) patch management is determined by various factors, such as the number of systems or subscriptions required and the level of support needed. Red Hat offers a range of subscription plans, each with its own pricing structure and support options. The Red Hat Enterprise Linux Workstation subscription starts at $179/year, while the Red Hat Enterprise Linux for Virtual Datacenters subscription is priced at $3,999/year. Red Hat also provides add-ons to complement your subscription and meet additional needs.

To obtain detailed and up-to-date pricing information for Red Hat Enterprise Linux patch management, visit Red Hat’s official website. There, you can find comprehensive information about subscription plans, pricing details, and additional add-on offerings.

Bottom Line: Linux Patch Management

Linux is at the heart of some of the most critical on-premises and cloud environments, so maintaining the open-source operating system is a critically important cybersecurity practice. Organizations can significantly improve their security posture by monitoring and regularly updating their Linux distributions, prioritizing patches, testing in controlled environments, establishing policies, implementing centralized patch management tools, creating backups, maintaining records, and training staff. A trustworthy patch management solution can help guarantee timely updates, expedite procedures, and safeguard important systems from new risks.

Source: https://news.google.com/rss/articles/CBMiQGh0dHBzOi8vd3d3LmVzZWN1cml0eXBsYW5ldC5jb20vbmV0d29ya3MvbGludXgtcGF0Y2gtbWFuYWdlbWVudC_SAQA?oc=5

January 30, 2024 | Tiny Core

“Bolster Your Website with Tiny Core Linux 4.0’s Upgrade to the Powerful Linux 3.0.3 Kernel – Latest from Softpedia’s Tech News”

“Introducing Tiny Core Linux 4.0: The Ultimate Lightweight Distro Makeover!”

Tiny Core Linux 4.0 has arrived with a bang, boasting a plethora of freshly updated packages and the switch to the latest Linux 3.0.3 kernel. Get ready to experience the sleek new search feature that makes finding what you need a breeze, available on both AppBrowser and the handy command-line tool, ab. Now searching for individual packages, like AbiWord, or categories like “browser” is easier than ever before.

But that’s not all, let’s take a look at some of the other highlights of this game-changing release:

– Linux 3.0.3 kernel
– Upgraded udev to version 173
– Now supports 486 and 586 machines
– Updated libraries including e2fsprogs 1.41.14, GCC 4.6.1, and util-linux 2.19.1
– Busybox 1.19.2 with latest patches and nbd-client
– Improved support for dynamic kernel dependency processing
– Updated Luxi fonts to disable hinting
– New loadcpufreq added
– Modified xsession for easy X startup troubleshooting
– And more!

Say goodbye to rarely used tools like the Floppy Tool as this release also marks their deprecation. And let’s not forget the heart and soul of Tiny Core Linux – being the ultimate lightweight distro with a tiny footprint of only 10 MB. But don’t let its size fool you, Tiny Core Linux comes fully equipped with a graphical environment, and you can easily customize it with additional packages and functionality to fit your needs.

Get your hands on the latest Tiny Core Linux 4.0 now and experience speed, efficiency, and minimalism at its finest. Don’t miss out, download it today on Softpedia.

Source: https://news.google.com/rss/articles/CBMia2h0dHBzOi8vbmV3cy5zb2Z0cGVkaWEuY29tL25ld3MvVGlueS1Db3JlLUxpbnV4LTQtMC1NYWtlcy10aGUtU3dpdGNoLXRvLXRoZS1MaW51eC0zLTAtMy1LZXJuZWwtMjIzODA4LnNodG1s0gEA?oc=5

January 30, 2024 | Tiny Core

“Experience the Latest in Cutting-Edge Technology with Tiny Core Linux 6.2: Enhanced NFS4 Support and Dual-Boot Capability for Both Legacy and UEFI Systems – Softpedia”

“Discover the World’s Smallest Linux OS – Tiny Core 6.2 Now Available for Download!”

Prepare to be amazed by the latest release of Tiny Core Linux 6.2 – the smallest Linux kernel-based OS in the world. Developed by a team of highly skilled and talented individuals, Tiny Core boasts incredible speed and efficiency with its new speedup patches and improved support for NFS4 filesystems. But that’s not all, as this update also brings a faster tce-load component and a more streamlined tce-ab symlink. And fear not, as even slow CD-ROM drives will no longer slow down the powerful tce-setup component.
Join the ranks of satisfied users who have already experienced the lightning-fast performance of Tiny Core 6.2. Download now from Softpedia or directly from our website – available for both 32 and 64-bit hardware architectures. Don’t miss out on the opportunity to discover the smallest and most dynamic Linux OS on the market.

Source: https://news.google.com/rss/articles/CBMif2h0dHBzOi8vbGludXguc29mdHBlZGlhLmNvbS9ibG9nL1RpbnktQ29yZS1MaW51eC02LTItQnJpbmdzLU5GUzQtU3VwcG9ydC1hbmQtYS1MZWdhY3ktQklPUy1VRUZJLU11bHRpLWJvb3QtVmVyc2lvbi00ODAxNDkuc2h0bWzSAQA?oc=5

January 30, 2024 | TrueNAS

Should you build your own NAS or buy one? Unraid vs. TrueNAS vs. Synology – 9to5Toys

While we’ve talked about network attached storage (NAS) devices many times, and we’ve taken an in-depth look at Blair’s massive 80 TB setup, we’ve never really looked at why you should pick different storage systems. Synology is quite popular, but so are TrueNAS and Unraid. If you’ve not heard of the latter two, nobody could blame you. In our guide today, we’re going to review Unraid vs. Synology vs. TrueNAS in the ultimate showdown to see which is the best for various storage setups.

Essentially, Synology is synonymous with simple, easy-to-use hardware and software that you can pick up at most major electronics retailers, plug in, and be up and running. However, TrueNAS and Unraid are software that you install on existing hardware that’s already sitting at your home or business, which makes entry much easier for many folks on lower budgets. Do any of these options sound interesting to you? If so, then let’s take a closer look at why you should have a NAS, and which you should choose down below in our head-to-head Unraid vs. Synology vs. TrueNAS review.

Why should you have a NAS?

A network attached storage device, or NAS for short, allows you to enjoy storage from anywhere in your home, or the world (depending on how it’s configured), without having to plug an HDD or SSD into your computer. That’s right, you can even have multiple terabytes of storage available from several drives paired together for greater storage options.

Most NAS setups are similar: a central machine that can hold one or more HDDs or SSDs, connected to the network (internal or external), and mountable on your machine. The functions of each NAS vary, as some can run software like Docker (more on that later) and others can function as Time Machine backups for your computers at home. Essentially, a NAS is a great way to have tons of storage available to your computers or devices without having to plug drives in. A NAS can also employ RAID (or some form of redundancy) to help protect against drive failures, should you have an HDD or SSD start to go bad at any point. There are quite a few options on the market for picking up or setting up a NAS, so we’ll only be looking at the three most popular options today.

Unraid vs. Synology vs. TrueNAS: Simplest setup goes to Synology

Off-the-shelf solution

Synology is likely the first name that most people think of when trying to pick up a NAS. They’re simple solutions that can be picked up at most retailers and honestly accomplish the main task of a NAS: allowing you to access your storage anywhere with ease. There’s little configuration required, and since you can even pick up units that come with storage already installed, it’s a simple solution that many will opt for.

Easy to use interface and quite powerful software

Synology’s DSM (DiskStation Manager) is also very simple. That’s the operating system the NAS runs on, and it’s by far my favorite OS for a NAS. I’ve used a few different brands, namely Synology and NETGEAR’s ReadyNAS, and Synology’s is by far the easier OS. It makes configuration and expansion of your array quite simple, and features like SHR (Synology Hybrid RAID) ensure that you can get the most out of whatever drives you have inserted into the machine. Should you opt for SHR (which I recommend you do), then expanding your array is quite simple. Just remove any drive from the array and replace it with a larger one. From there, you just tell the software what you did and it begins expanding to a new capacity that includes the increased size of the drive you just inserted. This operation can take a while, but overall, it’s quite seamless and easy to do. The only thing to keep in mind is that you can’t downsize the array, so if you want to take a larger drive out (or have one fail) and replace it with a smaller one, that’s not a supported operation.

Multiple RAID options so you can choose what works best for you

We’ve already talked about SHR a bit, which is what we recommend people use, as it’s simple and functional. However, Synology also has support for other RAID methods and the ability to be redundant to multiple drives failing in a single array. This means that you can lose one (or more) drives without losing all the data that is stored in your array. Whether you choose SHR, RAID5, RAID1, or even RAID0, Synology is quite flexible in redundancy options.

Built-in apps to handle most daily tasks

Synology has quite a few built-in applications to handle your daily tasks as well. It can function as a Time Machine right out of the box, and you’ll also find that the package store available on Synology is quite robust. There’s Plex, Synology Moments, calendar options, Docker, MailPlus, and many more. Docker is another subset of applications that you can install and run on your supported Synology NAS, which further broadens the software library that you can choose from.

Most packages are simple one-click installs and make it really easy for you to get something up and going as seamlessly as possible. Synology is really the one stop shop for an extremely simple, yet capable, NAS that anyone can use.

Unraid vs. Synology vs. TrueNAS: Most robust feature set and flexibility goes to Unraid

Generally run on old (or new) consumer- or professional-grade computer hardware

While Synology sells ready-to-go systems that you just buy, plug in, and use, Unraid takes a slightly different approach. Personally, Unraid is what I choose to use at home for my storage solution. I used Synology for years, tried out TrueNAS (more on that below), and eventually partnered with Unraid to build an insane storage setup. While my Unraid server started on a spare Ryzen desktop that I had at home, it’s now being run off an old enterprise-grade Lenovo RD440 rack-mount server. I’ve not had to do any reinstalls in order to switch over between three different computer setups, and I absolutely love that. Just move the USB installer and you’ll be ready to go.

Right now, my Unraid server is quite beefy and powerful. It has 12 cores and 24 threads, 64 GB of DDR3 ECC RAM, and a total of 47 TB of storage (though 8 TB of that is my parity drive, and another 1 TB is my cache). My RD440 server also has another three drive slots available for me to use at any time when I want to expand, which is something I plan on doing in the future, as right now I’m sitting at around 70% used of my available 38 TB.

Really, Unraid’s strength comes from being able to run on just about any hardware you have at home, be it new or old. Many will build new ultra-high-end systems and run their own computer off a virtual machine, but others only want the storage aspect and choose to use their old desktop after they upgrade. The choice really is yours. Minimum requirements for Unraid are quite low, which is what allows it to run on such a vast array of configurations. Essentially, if you have a 64-bit processor clocked at 1 GHz or higher and 2 GB of RAM, Unraid will run with ease.

Fairly simple setup, but some configuration required

When I wanted to replace my aging 4-bay Synology with a storage solution that was more robust and could handle more drives, I wanted something that was almost as easy to use as Synology’s DSM. Sure, I’ve done system administration for years, but that doesn’t mean I want my home networking setup to be something you need a network engineering degree to figure out. After searching high and low, I landed on Unraid.

Unraid is the base OS for your storage server, much like Synology’s DSM. You’ll have to run it “bare metal,” meaning that you boot the system off of the Unraid USB drive. I picked up a 32 GB USB 2.0 drive to use specifically for Unraid, and it’s worked wonderfully. Don’t worry about USB 2.0 being slow, as once the system boots, the OS is loaded into RAM, so it’s fast.

Setup is quite simple. Boot the USB drive, look at what the IP of the server is, and access that from the web browser of your normal computer. You’ll do little to no configuration on the server itself, as most is done in the web browser. Unraid is free to use for 60 days so you can trial it, but the licenses are quite budget-friendly, depending on how many attached drives you have. Pricing starts at $59 for six devices, $89 for 12, and $129 for unlimited. You can, however, upgrade from Basic or Plus to Plus or Pro at any time.

Setting up your array is quite simple

Once you’re up and going, registered, and ready to use it, just use it. You’ll navigate to the “Main” tab, which is where you’ll see the “array” of drives you have. Assign your largest drive as the parity, and all other drives below that. If you have a cache drive (which we recommend using an SSD for), that section is below the standard array settings. I have my array set to scan monthly for parity issues, which are rare to find. The way Unraid’s parity system works is quite interesting, but honestly it’s very in-depth and a bit hard to understand. If you want to learn more about how it works, this video will help you understand it much better than I could.

At any time that you want to remove a disk from the array, or add one, you’ll just have to stop the array, remove/insert the disk, and then start it again. It’s quite a simple process, and you can even upgrade your parity drive in the future. The main thing to keep in mind is that your parity drive always has to be the largest drive in the system. All other storage devices can be as big as or smaller than the parity drive.

You’ll be redundant to one drive failure, but each drive has its own file system so you’ll never lose it all in the event of hardware going bad

Unraid works on a parity-based redundancy system, like we mentioned above. Essentially, one drive in your system keeps a backup of all the other drives, so if you lose one, then it can rebuild the data that was there. However, unlike normal RAID systems, Unraid doesn’t actually rely on all of the drives for this to work. Your parity drive backs up each drive individually, in a sense. So, if you lose two drives, you won’t lose everything. What was on the two failed drives will indeed be gone and unrecoverable, however, the rest of the drives in your system have their own independent file system so they can still be read and used like normal. In fact, when setting up your storage shares, you can even specify what drives you want data stored on, helping to choose where things go should something fail.

Built-in Docker + VM access for a robust app library

Once you’re set up hardware-wise, it’s time to turn your attention to the software running on your Unraid server. Out of the box, it supports Docker and virtual machines, which instantly give you access to a plethora of applications to run in just a few clicks. Most Dockers are optimized for Unraid, but even if it isn’t, just a few minutes can let you launch any Docker container on your server. I run quite a bit on my server, but, to name a few Docker containers, you’ll find MariaDB for database management, Plex, HomeBridge, ddclient (Dynamic DNS), Ghost (my personal website), Swag (for reverse proxy and SSL), Ripper (to automate ripping DVDs and CDs to my server), and even Macinabox (which allows you to run a macOS virtual machine in just a few clicks). On the virtual machine side of things, it’s a bit more lightweight, as I really only have HASS.IO and a macOS virtual machine installed, depending on what I need at the time. But, given the resources my server has, spinning up new VMs for testing out an OS or software, or adding extra docker containers, is an extremely simple task. Unraid is also a very lightweight operating system, meaning that the most resources possible are saved for whatever software you decide to install.
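
For a sense of what that looks like under the hood, here is a generic Docker CLI sketch of the kind of command Unraid’s template-based web UI generates for you (the container name, port, and /mnt/user share paths are illustrative):

    docker run -d --name=plex \
      -p 32400:32400 \
      -v /mnt/user/appdata/plex:/config \
      -v /mnt/user/media:/data \
      --restart unless-stopped \
      plexinc/pms-docker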

Unraid vs. Synology vs. TrueNAS: Storage-first solution with robust data protection goes to TrueNAS

Similar to Unraid, you’ll run TrueNAS on old (or new) consumer- or professional-grade computer hardware

TrueNAS is similar to Unraid in that it’s an operating system you can run on new or old consumer- or professional-grade computers. I tried out TrueNAS on my old Ryzen system, just like I did Unraid. TrueNAS has very similar requirements to Unraid, and many people use it in the exact same way.

However, unlike Unraid, you can install TrueNAS however you’d like, instead of needing to use a USB drive. This can be a bit simpler for people, as they can install it on an SSD they already have lying around the house instead of having to buy a new dedicated USB flash drive for it. On the other hand, TrueNAS needs much more in the resource department to run. While it has a similar 64-bit processor requirement, you’ll need at least 8 GB of RAM for TrueNAS. That’s four times what Unraid requires, and it might not be available in your older system.

More complex setup, not as robust features

While TrueNAS can run on the same type of hardware as Unraid, and can even be installed on a normal SSD, you’ll find that the overall setup is a tad more complicated for not as many features. Upon booting the USB installer, you’ll have to configure where you want the OS to live inside a DOS-style prompt. After installing, you’ll reboot, remove the flash drive installer, and then TrueNAS will show you the IP at which it’s located. From here, configuration is a bit similar to Unraid in that you’ll access TrueNAS from its IP on a computer, log in, and begin setting things up.

Unlike Unraid, you won’t get nearly as robust virtual machine support, and Docker is nonexistent here unless you’re running TrueNAS Scale, which is still under active development and isn’t ready for prime-time yet. However, there are plugins and jails that you can run, though the options are much fewer than what you’ll find on Unraid.

OpenZFS is nice for multi-disk redundancy, but really cuts down on available storage

So we’ve talked about how Unraid is only redundant to one drive failure, and how Synology has multiple redundancy options to choose from. TrueNAS is different from both of those options. Based on the OpenZFS file system, TrueNAS allows you to choose the redundancy of each volume, though you’re more locked in once a decision is made.

When you go to create a volume, you’ll be given quite a few options to choose from. TrueNAS documentation goes into more detail here, but generally we’d recommend using RAIDZ1 or RAIDZ2. This means you’re redundant to one or two disk failures, but also requires you to have at a minimum three or four disks in the system. TrueNAS goes up to RAIDZ3, which is redundant to three disk failures, but requires five overall disks to function. This can be chosen on a per-volume level, though only one volume can exist per drive. So, if your entire system contains eight disks, and you choose RAIDZ3, then you’ll only have access to five of the drives for storage, and you’ll never be able to shrink that without re-creating the volume.
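
As a rough illustration of the trade-off (TrueNAS normally builds pools through its web UI, and the device names below are purely illustrative), six disks in a single RAIDZ2 vdev tolerate two failures while leaving roughly four disks’ worth of usable capacity:

    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    zpool status tank       # verify the layout and health of the new pool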

When I originally used TrueNAS, I created a mirrored volume with two drives in it, one mirroring the other. I then picked up an additional drive of the same size and wanted to expand the mirror to hold three drives, only to find out that I’d have to expand in pairs, and that I’d still only get two of the four drives’ worth of storage, as the other two would be held back as mirrors. This is more robust from a data-protection standpoint, as you’re redundant to one drive failure per pair, but it also cuts your usable storage in half. RAIDZ has a similar limitation: you can’t expand a volume beyond how it was created. Basically, to add more storage, you have to add multiple drives to the system and create a new volume. TrueNAS is reportedly working on making this easier, but that capability isn’t out yet and we’re not sure when it will launch, so if you want a more flexible system, we recommend going with Unraid or Synology.

Overall thoughts

The simplest option is a pre-built Synology NAS if you have the money for it

In the end, if you just want a system that you can plug in, configure, and forget about, we recommend Synology. You don’t have to custom-build anything, the hardware is readily available preconfigured, and it’s simple to set up.

Anyone can set up Unraid on an old computer and enjoy network attached storage on a budget

However, if you want an extendable, versatile networked storage solution that’s both powerful and simple, Unraid is the best option. It can run on just about any hardware, features redundancy, and offers built-in Docker and virtual machine support. It’s truly the best all-around option for a robust storage server that won’t break the bank, and it also offers the greatest amount of flexibility. Sure, it takes a bit more to configure, but in the end it offers many more features than the other two options presented here.

TrueNAS is great for those who want a storage-first solution that has data security in mind

If you really only need a storage solution, and have no desire to run Docker containers or virtual machines, TrueNAS is still a fantastic option. Thanks to the robust redundancy of OpenZFS, you’ll have additional peace of mind in data security that Unraid and Synology don’t generally offer. Sure, it brings a lower total amount of storage to the table and is less flexible in setup, but it also ensures you have the most redundancy possible against hardware failure, and for some, that’s the most important thing.

9to5Toys’ Take

In the end, I chose Unraid for my storage server at home because it’s extendable, flexible, and robust. I can run any software I need with a few clicks, and adding additional storage is just as simple. I just slide a new drive in, click “Add,” and it’s there. If I outgrow my existing 12-bay server, then I just migrate the disks and USB drive to a new server and I’m ready to go. It’s just that simple.

However, for many, Synology will be an easier solution since it’s available at a variety of retailers, and you can just slide disks in and go. The main caveat with Synology is that you’re locked into its ecosystem, and it’s not quite as easy to move to a NAS with more bays should you outgrow whatever you buy.

Do you have a NAS at home? Have you been considering picking one up? Well, now’s as good a time as any to start, so sound off in the comments below with your favorite method of storing data at home.



January 30, 2024 | MakuluLinux

New MakuluLinux Release Brings AI to the Max – TechNewsWorld

The first beta release of MakuluLinux Max puts artificial intelligence inside everyday desktop usage.

Max’s AI entity Electra could well be the start of a new type of innovation adopted by other distribution developers. It raises the bar on what Linux users should expect from their computing platform.

Max is a new distro from MakuluLinux developer Jacque Montague Raymer. It is the first AI-integrated operating system designed for Debian compatibility and running on a Gnome backend. It utilizes multiple desktop layout features ported from one of his other distros, Shift, and offers lots of flexibility in terms of themes and customization options.

I rarely review early-phase beta releases. This one, however, performs with polish and is suitable for daily use. I fiddled with Max’s alpha releases over the last few months and could not wait for the beta.

I am so impressed that I installed Beta 1 on a work computer, along with numerous specialized applications I use for tracking research, cloud storage, and productivity. It has a few minor glitches, but that’s to be expected from a distro fresh out of the development lab.

The AI Integration options really stand out. I have written about several AI startups in recent months. All of them required third-party requests for access and cloud-delivered connections.

Not so with Max’s own AI model, which Raymer created. Early versions of the AI feature used the OpenAI-developed model, but the MakuluLinux developer later built his own, making it simple for users to access without needing anything else or paying any fees, he noted.

He adapted code from the Llama, Google PaLM 2, and Bing AI models to create an AI experience that runs 50 processes and can handle an average of 400 queries per minute.

Maturing Family Lineage

MakuluLinux arrived on the Linux scene in 2015 with a different approach to implementing Linux OS features, one that Raymer hoped would disrupt the status quo. I have reviewed nearly a dozen MakuluLinux releases since then.

Each one involved a different desktop option. Each one introduced new features and improvements that gave MakuluLinux the potential to challenge long-time major Linux distro communities.

What Max brings to the open-source OS community should establish this South Vietnam-based Linux developer with key player status in the Linux distro field.

The integrated AI feature is the most innovative aspect of the new MakuluLinux Max distro. Its revamped desktop provides much more than a stock version of Gnome, as the desktop settings let you tweak the look and feel to include elements of Xfce and MATE.


Over the years, his development journey has brought users a variety of desktop configurations, including LinDoz, Flash, Core, Gamer, Shift, and a few more. Of those previous selections, only Gamer and Shift remain available. The addition of Max gives you three MakuluLinux distros to choose from.

This Max edition, however, brings MakuluLinux to its performance pinnacle. The extensive inclusion of Gnome extensions, numerous code tweaks that finesse features found in other desktops, and now the AI integration combine to produce an innovative Linux platform that will be challenging for other distro developers to beat.

Remember, Max is only a beta 1 release, yet it is already polished and reliable enough to serve as a daily computing platform.

AI-Integration Game Changer

When I first ran Max in a live session, Electra’s face popped onto the screen and was perfectly synched with voice and lip movements as the entity presented an overview of the built-in AI capabilities. That first encounter was sort of creepy!

Electra is very capable of operating within this Linux OS. It told me so, in so many words.

Actually, referring to Electra as an “it” after the on-screen introduction feels awkward. So I am using Electra’s, ahem, preferred pronouns of she/her for the purposes of this review.

Electra communicates in natural language, answering questions comprehensively and informally and generating different creative text content formats.

She can also collaborate on tasks like generating, debugging, and explaining code snippets; writing jokes, music, social media content, and blog posts; summarizing web pages; doing research; and communicating via voice input.

For clarification, Max is running on hardware that today’s standards would call “underpowered.”

My HP Inspiron laptop is eight years old and is powered by a quad-core Intel Pentium N3700 with 8GB of RAM. It has balked at running other Linux distros, but its performance running Max has been a pleasant experience.

Electra Helped Shape AI Features in Max

Raymer noted in his website comments and in our email exchanges about the Max release that Electra actually had a critical role in helping him develop some of the Max distro’s other AI-powered features, such as the Virtual Cam, Image Generator, Story Teller, and Widgets System.

“Electra had a big hand in the development of each of those tools … in fact, many of the AI applications we use now in Max were in part created with the assistance of Electra,” said Raymer.

The AI system residing in Max is very capable. It learns daily and will continue to grow over time, he added.

“Electra has excellent memory. Each conversation is tracked with an ID, ensuring continued, uninterrupted conversation. This was also very important to us,” Raymer said.

Smart on Several Levels

Electra’s neural technology can be accessed in numerous ways: voice, widgets, text, the web, and a terminal interface. The voice connection lets users control most of their computer using only vocal commands. The operating system also comes with a set of custom-written AI applications created by MakuluLinux to give it an extra-unique experience.

Voice integration operates much like Siri and Google Assistant do on mobile handsets. Simply say, “Hey, Electra.” With an average communication delay of about seven seconds, you get a reply in under 10 seconds.

Launch the terminal interface from the desktop to use either text or voice to communicate while seeing the responses typed in the window. This feature makes it easy to copy/paste for storage or sharing.

The resizable and draggable desktop widget runs from the panel to communicate with the AI using text in a mini browser environment. You can also communicate with Electra via Makulu’s web interface and the desktop chat client launched from the main menu.

Information mode is accessible only via the built-in web/Android interface, which is designed to give quick results with web links to reference data. Here you can get verifiable links to support the text responses, which typically arrive in one to three seconds.

My AI Experiment on Max

I gave in to a compelling inner urge to challenge Electra and asked her to create a small rhyming poem about Linux.

I typed the command into the terminal window as described above. Not quite sure if the results were a one-time wonder or if extended creativity was possible, I asked Electra for a second effort.

See for yourself. On the left is poem one; on the right is poem two.

Bottom Line

MakuluLinux Max is a new offering that opens the world of artificial intelligence to the Linux desktop. It raises the operating system to an experience not available anywhere else.

It uses a tweaked Gnome Software Center that fully supports Flathub and Snaps. You can also use the installed Synaptic Package Manager.

Max comes with Steam pre-installed. Additional gaming platforms are available from the Software Center.

The MakuluLinux Max Desktop Manager lets you customize and theme your desktop. You can select from system layouts, constructing tools, special effects, cursors, icon sets, wallpaper management, and theme colors. All changes are almost instant; no scripts, no commands, and no reboot required.

The Max OS also comes with the Constructor Tool that lets you recompile the operating system with all your changes back into an ISO to reinstall on other computers. This feature has been a staple in other MakuluLinux editions for the last few years.

I have been keeping an eye out for an early release of Beta 2, but it is still early for this next phase. However, I can report that in the last few days, prior to publishing this review, Max Beta 1 received a series of patches and package upgrades that focused on adding more finesse and capabilities to the AI system.


Suggest a Review

Is there a Linux software application or distro you’d like to suggest for review? Something you love or would like to get to know?

Email your ideas to me, and I’ll consider them for a future column.

And use the Reader Comments feature below to provide your input!