Puppetmanaged.org is a collection of Puppet modules for deploying common services like Apache, MySQL, and more. Each module contains file declarations to set up the service and uses classes and definitions to make the modules easy to use. Users can contribute modules by storing them in git repositories and collaborating via mailing lists.
Installing Software, Part 2: Package Managers — Kevin OBrien
This document discusses different levels of package management tools in Linux, from graphical user interfaces down to command line tools. It focuses on the command line tools YUM and RPM for RPM-based systems like Fedora, and APT and DPKG for Debian-based systems like Ubuntu. It explains how to add software repositories, update package lists, install, upgrade, and remove packages from the command line. It also discusses manually installing packages using the lowest-level RPM and DPKG tools when packages are not available in repositories.
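The command-line layers described above can be sketched roughly as follows. The package name is a placeholder, and the privileged commands are shown as comments because they require root and a network connection; only the harmless detection step at the end actually runs:

```shell
# Typical high-level operations (need root and a network, so shown as comments):
#   apt-get update               # Debian/Ubuntu: refresh package lists
#   apt-get install <package>    # install a package and its dependencies
#   apt-get remove  <package>    # remove it again
#   yum install <package>        # Fedora/RHEL equivalent
# Lowest level: install a single, already-downloaded package file:
#   dpkg -i package.deb          # Debian-based systems
#   rpm -i package.rpm           # RPM-based systems

# A portable way to detect which family a given system belongs to:
if command -v dpkg >/dev/null 2>&1; then
    echo "Debian-based: use apt/dpkg"
elif command -v rpm >/dev/null 2>&1; then
    echo "RPM-based: use yum/rpm"
else
    echo "unrecognized packaging system"
fi
```

The detection trick works because each family ships its low-level tool (`dpkg` or `rpm`) on every installation, even when different high-level front ends are in use.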
This document discusses Linux disaster recovery and the open source tool Relax and Recover (rear). It begins by explaining that disasters do happen and outlines the basics of disaster recovery and business continuity. It then discusses the need for disaster recovery plans for Linux systems and some common open source disaster recovery software options, including rear. Rear allows users to create bootable rescue media, backup systems, and facilitates fast disaster recovery.
The document provides an introduction to operating systems, kernels, shells, Linux, and the differences between Linux and Windows. It discusses that an operating system consists of system software that acts as an intermediary between the user and computer hardware. The kernel is the core of the operating system and constantly runs, while the shell provides an interface between the user and kernel. It then covers Linux features such as being open source, modular, offering choices of desktop environments, and being portable. It also compares Linux and Windows in areas such as licensing, market share, filesystems, installation, and configuration.
This document provides an overview of Ubuntu Desktop training, including:
- The most popular Linux distributions are Ubuntu, Fedora, and openSUSE. Ubuntu focuses on usability for new and home users.
- Ubuntu installation can be done via live CD or USB drive and involves selecting language, timezone, partitioning disks, and providing user details.
- Ubuntu supports package management via repositories, software installation and removal tools like Synaptic, and multimedia, development, networking, communication, and productivity applications.
The document provides information about the history and development of Linux. It states that in 1991, Linus Torvalds, a Finnish computer science student, released the first version of the Linux kernel. Though intended as a hobby project, Linux gained significant support from other developers over the years, and the kernel has grown well beyond its original capabilities.
This document provides an overview of the Linux operating system, including its core components and popular desktop environments. It defines Linux as a collection of open source software programs distributed together with the Linux kernel. The kernel acts as an intermediary between hardware and software. Popular desktop environments for Linux include GNOME and KDE, which differ in terms of default layout, menu navigation, and other usability features. The document also discusses key open source projects like GNU and differences between various Linux distributions such as Ubuntu, Linux Mint, and Fedora.
The document is an introduction to Linux presented by Vikash Agrawal. It discusses Linux's open source philosophy, history originating from Unix, advantages like low cost and flexibility, and widespread uses including by hackers, supercomputers, Google, Wikipedia and the New York Stock Exchange. It also covers major Linux distributions like OpenSUSE, Red Hat, Debian, Knoppix, Fedora and Slackware.
This document provides an overview of a 2 hour Linux workshop. It will cover the history and architecture of Linux, the file system, basic commands, and software management. No prior Linux experience is necessary. The workshop will focus on Ubuntu but discuss other Linux flavors. It will start with the history of UNIX and the GNU project. It will then cover the Linux kernel, open source software, Ubuntu releases, filesystems like ext3 and ext4, files and directories, basic commands, and installing, removing, and upgrading software using tools like apt, Synaptic, and command line commands.
This document provides an introduction to Linux, UNIX, and GNU. It describes UNIX as an operating system originally developed at Bell Labs in the 1970s. Linux is a freely distributed implementation of the UNIX kernel and is very similar to UNIX. GNU stands for GNU's Not UNIX and is a free UNIX-like operating system that contains no UNIX code. The document discusses common Linux directories for programs and explains that text editors and C compilers are essential tools for programming in Linux. It provides a simple "Hello World" example C program to demonstrate writing, compiling, and running a basic Linux program.
Here are the key differences between relative and absolute paths in Linux:
- Relative paths specify a location relative to the current working directory, while absolute paths specify a location from the root directory.
- Relative paths are interpreted starting from the current directory, which can be written explicitly as a period (.). Absolute paths always start from the root directory, denoted by a forward slash (/).
- Relative paths are dependent on the current working directory and may change if the working directory changes. Absolute paths will always refer to the same location regardless of current working directory.
- Examples:
- Relative: ./file.txt (current directory)
- Absolute: /home/user/file.txt (from root directory)
So in summary, relative paths are a convenient shorthand that depends on where you currently are, while absolute paths identify a location unambiguously from the root.
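The distinction can be demonstrated in a short shell session; the /tmp/pathdemo tree here is a made-up example:

```shell
# Build a small demo tree (the directory names are hypothetical).
mkdir -p /tmp/pathdemo/docs
echo "hello" > /tmp/pathdemo/docs/file.txt

cd /tmp/pathdemo

# Relative path: resolved against the current directory, /tmp/pathdemo.
cat ./docs/file.txt                  # prints: hello

# Absolute path: resolved from the root, works from any directory.
cat /tmp/pathdemo/docs/file.txt      # prints: hello

cd /
# From / the relative path ./docs/file.txt would no longer resolve,
# but the absolute path still refers to the same file:
cat /tmp/pathdemo/docs/file.txt      # prints: hello
```

Note how the meaning of `./docs/file.txt` changed the moment the working directory changed, while the absolute path kept pointing at the same file.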
Linux is a free and open-source operating system assembled under a collaborative development model. The Linux kernel was first released in 1991 and has since been ported to run on various hardware platforms. It is widely used today for servers, supercomputers, embedded systems like Android, and desktop systems. Common Linux distributions include desktop environments like GNOME or KDE and include applications like Firefox, LibreOffice, and GIMP. Programming languages widely supported on Linux include C, C++, Java, Python, and Perl. The document then discusses advantages of Linux like low cost, stability, flexibility, security, and its open source nature.
Linux is an open-source operating system based on the Linux kernel. It was created in 1991 by Linus Torvalds and has since grown significantly through contributions from its worldwide community of developers and users. Linux is commonly used for servers, but also powers many smartphones, smartwatches, and embedded devices. It is free to use and modify under open-source licenses like the GNU GPL.
The document provides an overview of Linux, including its history and features. It discusses how Linux originated from the GNU project and was started by Linus Torvalds. Linux is an open source operating system that can run on various platforms. It provides features like multi-user access, multitasking, and security benefits compared to other operating systems. The document also describes the typical Linux desktop environment and popular software applications available for Linux.
This document provides an overview of the Linux operating system, including its history, design principles, and key components. It describes how Linux originated as a small kernel developed by Linus Torvalds in 1991 and has since grown through collaboration into a full-fledged open source operating system compatible with UNIX standards. The document outlines Linux's modular kernel architecture, use of kernel modules, process and memory management designs, and standards-compliance.
Part 1 of 'Introduction to Linux for bioinformatics': Introduction — Joachim Jacob
This is part 1 of the training session 'Introduction to Linux for bioinformatics'. We explain in very general terms what Linux is and stands for, and how we can get access to it. Interested in following this training session? Please contact me at http://www.jakonix.be/contact.html
This document covers the basics of installing Linux systems, including the boot process, distribution selection, installation options, and basic administration tasks. It discusses BIOS initialization, the boot loader, kernel initialization, and the init process. Installation methods like USB, DVD, and network installation are presented. Basic system administration like services, processes, mounting disks, and shutdown are also outlined.
The document provides an introduction to Linux, including the purpose of operating systems, key features of the Linux OS, the origins of Linux, common Linux distributions, and uses of Linux in industry. It discusses how Linux works with the Linux kernel and open source software like GNU to form the operating system. It also covers Linux distributions, common applications, and uses of Linux as a server, workstation, for scientific/engineering purposes, and more.
Linux was created by Linus Torvalds in 1991 based on UNIX. It is an open source operating system with a modular design consisting of the kernel at the core which manages memory, processes, and hardware access. The shell provides a command line interface between users and the kernel while the file system arranges files in a hierarchical structure with everything treated as a file. Common directories include /bin, /sbin, /etc, /dev, /proc, /var, /tmp, /usr, /home, and help is available through man pages or command --help.
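A quick way to confirm that layout on a given machine is to probe the standard directories. This is a minimal sketch that assumes a conventional Linux filesystem; on other Unix-like systems some entries may be absent:

```shell
# Report which of the standard top-level directories exist on this system.
for d in /bin /sbin /etc /dev /proc /var /tmp /usr /home; do
    if [ -d "$d" ]; then
        echo "$d: present"
    else
        echo "$d: not found"
    fi
done
```

On a typical distribution every entry prints "present"; minimal containers or non-Linux systems may lack `/proc` or `/home`.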
Get to know Linux - First steps with Ubuntu — Maja Kraljič
The document discusses installing and using Ubuntu Linux. It covers comparing commercial operating systems to Linux, the four essential freedoms of free software, popular Linux distributions like Ubuntu, installing Ubuntu via a live USB or virtual machine, using the Ubuntu operating system, and learning more about the Linux terminal and programs.
Covers the IBM - Lotus commitment to Linux and ways a business customer can create business savings by running their supported Lotus product offerings on Linux. Includes a few admin tips and tricks too.
OpenBSD is a free, open-source Unix-like operating system descended from Berkeley Software Distribution (BSD). It was created in 1995 when project leader Theo de Raadt forked from NetBSD. OpenBSD emphasizes portability, security, and integrated cryptography. It is developed and maintained by volunteers and finances itself through donations and selling installation media.
Top Linux distributions & open source browsers — pawan sharma
1. Linux distributions take the Linux kernel and combine it with other free software to create complete operating system packages. Some popular Linux distributions include Red Hat Enterprise Linux, Ubuntu, Linux Mint, OpenSUSE, CentOS, and Debian.
2. The Linux kernel was created by Linus Torvalds in 1991 building on earlier work by Richard Stallman and others. It has since been developed collaboratively by thousands of programmers worldwide.
3. The three most popular web browsers are Mozilla Firefox, Google Chrome, and Opera. Firefox has over a billion users and supports extensions and standards compliance. Chrome provides bookmarks, security features, and a store for extensions. Opera has features like speed dial and tabbed browsing.
This document provides an overview of the Linux operating system. It discusses that Linux was developed as an alternative to expensive UNIX operating systems and as a free software project. The document outlines the history from the GNU project in 1984 to Linus Torvalds developing the initial Linux kernel in 1991. It describes how Linux is now widely used on servers, supercomputers, embedded systems, and desktop computers. The key advantages of Linux discussed are that it is free, open source, powerful, stable, and secure.
I Am Linux - Introductory Module on Linux — Sagar Kumar
This module covers Introduction to Linux, History of Linux, Features of Linux, Advantage of Linux, File System Hierarchy Standard, Knowing root, Linux Commands, Working with Files and Directories, etc.
Crafting GNU/Linux distributions for Embedded target from Scratch/Source — Sourabh Singh Tomar
The following content is perhaps the next step in the Embedded Linux distribution process: working out the entire process from sources. This time it is more elaborate.
Volunteering at YouSee on Technology Support — YouSee
This document provides instructions for volunteering to develop IT solutions for social causes using open source web application programming. It discusses installing PHP, MySQL, Apache and related tools on Windows using WAMP server or on Linux. It also covers using Git and GitHub for collaboratively developing software by forking repositories, cloning them locally, committing changes and pushing them to the remote repository. The key steps are to install necessary software, fork a project repository on GitHub, clone it locally, make code changes, commit and push them for review and merging into the master repository.
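The local half of that Git workflow can be sketched as follows. The repository name, file, and identity are hypothetical, and the GitHub-specific steps are left as comments since they need a real remote:

```shell
# Stand-in for a fork cloned from GitHub: create a local repository.
mkdir -p /tmp/gitdemo
cd /tmp/gitdemo
git init -q demo
cd demo
git config user.email "volunteer@example.org"   # placeholder identity
git config user.name "Volunteer"

# Make a code change and record it locally.
echo "<?php echo 'hello'; ?>" > index.php
git add index.php
git commit -q -m "Add index page"

# In a real workflow you would now push to your fork and open a pull request
# for the administrator to review and merge:
#   git remote add origin https://github.com/you/project.git
#   git push origin master
git log --oneline    # shows the one commit recorded above
```

The commit exists only in the local repository until it is pushed; review and merging into the main repository happen on the remote side.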
Linux has emerged as a number one choice for developing OS-based embedded systems. The open source development model, customizability, portability, and toolchain availability are some reasons for this success. This course gives a practical perspective on customizing, building, and bringing up the Linux kernel on ARM-based target hardware. It ties together several earlier modules: Linux administration, hardware knowledge, Linux as an OS, and C/computer programming. After bringing up Linux, you can port any existing application to the target hardware.
Linux is an open source operating system kernel developed in the 1990s as a free replacement for Unix. It uses a monolithic kernel design with layered components like the GNU operating system tools. Popular Linux distributions include Ubuntu, Fedora, and Debian. Ubuntu is suitable for all users as it is easy to install, use, and has a large software library. The basic Linux file system, commands, and how to install software are described.
Linux is an operating system kernel distributed under an open source license, with functionality much like UNIX. It germinated as an idea in the mind of the young and bright Linus Torvalds when he was a computer science student. The main advantage of Linux was that programmers were able to use the Linux kernel to design their own custom operating systems. With time, a new range of user-friendly OSes stormed the computer world. Now Linux is one of the most popular and widely used kernels, and it is the backbone of popular operating systems like Debian, Knoppix, Ubuntu, and Fedora.
Getting started with setting up embedded platform requires audience to understand some of the key aspects of Linux. This presentation deals with basics of Linux as an OS, Linux commands, vi editor, Shell features like redirection, pipes and shell scripting
Volunteers can develop web applications using open source technologies like PHP, MySQL, and GitHub. WAMP is recommended for Windows to provide Apache, MySQL, PHP and PHPMyAdmin. Linux users should install Apache, PHP and MySQL separately. GIT is used for version control, and developers should fork repositories on GitHub, make changes locally, and push commits to have their code merged by the administrator.
The document provides an introduction to the Autotools build system used for cross-platform compilation of software. It discusses the main Autotools tools - Autoconf, Automake and Libtool, and how they help make software portable across operating systems by addressing differences in platforms. The document gives an overview of how Autotools works from both a user and developer perspective, and provides resources for learning more about Autotools and related standards like the GNU Coding Standards.
This document discusses improving the usability of open source software. It explores options for developing graphical user interfaces for open source projects. Developers can work on existing open source projects or start new ones focused on areas like operating systems, applications, frontends, configuration files, or translations. Popular graphical libraries like GTK+ and Qt can be used to develop GUIs, and bindings exist for languages like Python, Perl, and C++. Tools like Glade and Qt Designer allow visual design of user interfaces. Frameworks like wxWidgets aim to create truly cross-platform applications with native looks and feels. The document provides examples of "Hello World" programs using different libraries and languages.
Embedded systems are basically Single Board Computers (SBCs) with limited and specific functional capabilities. All the components that make up a computer, like the microprocessor, memory unit, and I/O unit, are hosted on a single board. Their functionality is subject to constraints and is embedded as part of the complete device, including the hardware, in contrast to desktop and laptop computers, which are essentially general purpose. The software part of embedded systems used to be vendor-specific instruction sets built in as firmware. However, drastic changes have been brought about in the last decade, driven by the spurt in technology and, thankfully, Moore's Law. New, smaller, smarter, elegant but more powerful and resource-hungry devices like smartphones, PDAs, and cell phones have forced vendors to decide between hosting system firmware or full-featured operating systems embedded with devices. The choice is often crucial and is decided by parameters like scope, future expansion plans, modularity, scalability, and cost. Since most of these features are built into operating systems, hosting an operating system more than compensates for the slightly higher cost overhead associated with it. Among various embedded operating systems like VxWorks, pSOS, QNX, Integrity, VRTX, Symbian OS, Windows CE, and many other commercial and open-source varieties, Linux has exploded onto the computing scene. Owing to its popularity and open source nature, Linux is evolving as an architecturally neutral OS, with reliable support for popular standards and features.
The RULE project: efficient computing for all GNU/Linux users — Marco Fioretti
The RULE project (Run Up-to-date Linux Everywhere, http://rule.zona-m.net) was an attempt to fight the waste of computer equipment with properly chosen Free Software. Since some of those needs are still valid today, here is how I presented RULE at the Rome Linux Day 2004.
Suse Studio: "How to create a live openSUSE image with OpenFOAM® and CFD tools" — Baltasar Ortega
A description of SUSE Studio, along with an excellent explanation of how to use it, by Alberto Passalacqua.
Linux Operating System Migration Proposal - CMIT 391 — washingtonrosy
Linux Operating System
Migration Proposal
CMIT 391 - Section # 6380
Eqbal Danish
Benefits of Linux
Linux is "Open Source", which means that anybody can build their own, slightly different, versions of Linux using the same underlying programs. People gather together their own choices of these programs and offer them to the world.
Linux is a system that converts a powerful but mindless heap of silicon into something that an ordinary user can control, and which can run programs written to a common standard.
Linux can be made even more powerful when it's packaged with GUI's, other tools and utilities.
Different people can change this code to make the system better, and even sell it if they want.
If you are a technical person who enjoys technology, you can't beat the freedom it gives you. If you are not a technical person, then once set up, you will have a more stable, reliable, and secure system.
The real benefit of Linux's community approach to software is that the community is made up of individuals with different tastes, many of whom are developers. This means that the installation on your own system can be incredibly personal and tailored to your tastes.
The freedom of being open source means you can be completely sure of what is running on your system. In terms of privacy that is a real advantage: you know there is nothing spying on you on behalf of advertising, marketing, or other companies.
Linux Derivative Recommendation
For an all-round rock-solid experience for general use, Debian is the best due to its universal nature.
It runs on 10 different architectures and comes with a huge (the biggest, actually) collection of pre-compiled software in its repositories, ready to install.
Based on what packages you install or remove, you can totally transform an already installed Debian to be most suited for any kind of work.
I recommend Debian simply because it can be the best choice no matter what you want to use it for.
It is also good for network servers, popular for personal computers, and has been used as a base for many other distributions.
The appeal of Arch Linux is that your system is exactly what you make it: you decide exactly which packages you want. The end result is that your system is custom tailored to your computing experience and necessities. This also has the added advantage of being an extremely flexible distro.
With Arch Linux, you have unlimited choices for every aspect of your machine. If you are a proponent of Free Software, you can elect to only use free packages. If you don't want or need a full desktop environment, you can elect to use a minimalistic window manager.
Linux Graphical Interface
When it comes to a GUI on Linux, you have a number of options, and most distros offer multiple GUI versions built in.
So depending on your taste, you are spoiled for choice.
X (also called X11) is responsible for GUI in Linux.
In a typical Linux machine, X provides the underlying display layer on which desktop environments are built.
Composer is a dependency manager for PHP that allows projects to declare their dependencies and automatically installs them. It downloads dependencies into a project, sets autoloading, and supports PSR-0 and PSR-4 autoloading standards. To use Composer, declare dependencies in a composer.json file using the "require" key and run composer install to download and install the dependencies.
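A minimal composer.json along those lines might look like the following; the monolog package, version constraint, and `App\` namespace are illustrative choices, not taken from the original document:

```json
{
    "require": {
        "monolog/monolog": "^2.0"
    },
    "autoload": {
        "psr-4": {
            "App\\": "src/"
        }
    }
}
```

Running `composer install` in the same directory would then download the declared dependencies into `vendor/` and generate `vendor/autoload.php`, which maps the `App\` namespace onto the `src/` directory per PSR-4.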
A unique module combining various previous modules you have learned, bringing together Linux administration, hardware knowledge, Linux as an OS, and C/computer programming. This is a complete module on embedded OS; as of now no books cover this with such practical aspects. Here is consolidated material to get a real hands-on perspective on building a custom embedded Linux distribution on ARM.
This 1st presentation in the training "Introduction to linux for bioinformatics" gives an introduction to Linux, and the concepts by which Linux operates.
This document provides instructions for installing the Ubuntu Linux distribution for beginners. It introduces Linux and explains why it is free and open source. It recommends choosing Ubuntu and describes how to download the installation files, burn them to a CD or DVD, and perform a full installation that will erase the existing operating system on the hard drive. The document outlines the four installation types and provides a glossary of common Linux terms for new users.
This document provides an introduction to using Linux for new users by summarizing the key steps and concepts. It explains how to choose a Linux distribution like Ubuntu, download and install it, introduces the Ubuntu desktop interface, and describes some basic applications that come preinstalled like OpenOffice, Firefox, and Rhythmbox. It also discusses different desktop environments like GNOME, KDE and XFCE that can be used.
This is a review of the book American Icon, which discussed how Alan Mulally and Ford overcame challenges that could have sunk the company, and then avoided taking a Federal bailout.
TLS 1.3 is the latest version of the Transport Layer Security protocol. It aims to improve security and performance over previous versions. Key changes include removing support for older encryption standards, simplifying the handshake process to reduce latency, improving protections against downgrade attacks, and using ephemeral Diffie-Hellman key exchange for forward secrecy instead of long-lived RSA keys. While improving security, TLS 1.3 also faces challenges around compatibility with older systems and allowing passive network monitoring.
The document discusses forward secrecy in encrypted communications. It explains that forward secrecy uses ephemeral keys for each session so that compromising a server's private key at a later date would not allow decryption of past encrypted sessions. It provides examples of Google, Twitter, and Apple implementing forward secrecy by using protocols like elliptic curve Diffie-Hellman key exchange that provide forward secrecy and help protect encrypted data even if server keys are lost or stolen in the future.
This is a description of the Diffie-Hellman-Merkle Key Exchange process, with a presentation of the essential calculations and some discussion of vulnerabilities
Password best practices and the LastPass hackKevin OBrien
This presentation looks at the best practices for password security, and shows why LastPass is still one of the best tools for keeping you safe on the Internet
SSL certificates use public key encryption to verify secure connections between clients and servers. Issues arise from the hierarchical trust model where root certificate authorities can sign other certificates, and some authorities have had their private keys compromised, allowing fake certificates to be generated. Alternative approaches like Certificate Transparency aim to increase transparency and accountability over the certificate issuance and validation process.
Encryption is key to safety online, but also important offline. But how does it work? This presentation will cover the basics and help you to be safer.
The subject of passwords is important today since they protect all of your accounts, and are frequently attacked by crackers. In this presentation I examine the technology used to handle and protect passwords, and make recommendations for what the user can do to protect themselves online.
The Linux directory structure is organized with / as the root directory. Key directories include /bin and /sbin for essential system binaries, /boot for boot files, /dev for device files, /etc for configuration files, /home for user home directories, /lib for shared libraries, /media and /mnt for mounting removable media, /opt for optional application software, /proc for process information, /root for the root user's home, /tmp for temporary files, /usr for secondary hierarchy data and binaries, and /var for variable data.
This is a presentation that looks at some of the Linux commands you could use to identify the hardware on your system. This can be useful for troubleshooting, or just for figuring out which motherboard is in which box.
This document provides guidance on diagnosing and resolving a sluggish computer running Linux. It recommends starting with software issues by using the "top" command to check for processes using significant CPU or memory resources. If closing problematic processes improves performance, a software problem exists that may require reinstalling or replacing applications. Hardware problems like CPU load, insufficient RAM causing disk swapping, and disk I/O issues are also covered, with recommendations to use commands like "top", "uptime", and "iostat" to diagnose the source before upgrading hardware.
The ps command displays information about active processes. It lists all running processes by default but additional switches provide more details. Common switches include -A for a complete list, -f for full details, and -l for a long listing. The ps command is often used to find the process ID (PID) of a frozen process so it can be killed using the kill command. Piping ps output to grep can filter results to quickly find a specific process.
Installing Linux: Partitioning and File System ConsiderationsKevin OBrien
This document discusses considerations for installing Linux, including whether to do a single Linux install, dual boot with Windows, or multiple Linux installs. It provides examples of recommended partition sizes and file system types for different scenarios, such as allocating 50GB to / and the rest to /home for a single Linux install, or using FAT32 for a partition to share data between Linux and Windows on a dual boot system. The document emphasizes backing up data and having separate partitions for /var or /home if installing on a server.
ifconfig is a command used to configure network interfaces in Linux, BSD, Solaris, and Mac OSX. It displays the status of interfaces, including the IP address, subnet mask, hardware address, and packet transmission/reception statistics. It is used at boot to configure interfaces and can also be used to view interface information or manually configure addresses, change interfaces between up/down states, and set other parameters.
The document discusses the find command in Linux, which allows users to search for files and directories based on various criteria like name, type, size, permissions and timestamps. It provides examples of find syntax and options to search for files by name, type, size and time modified. The document also explains how to use the -exec option to perform actions on the files found, like moving or deleting them.
This document discusses shortcuts for navigating the command line interface more efficiently, including autocompletion and copy/paste. It explains how autocompletion allows tabbing to fill in commands and filenames, and how copy/paste using Ctrl-Shift-C/V streamlines copying commands between programs and the terminal. Mastering these built-in shortcuts allows users to work much faster in the shell than in a GUI and do things not possible in graphical interfaces.
The Shell Game Part 3: Introduction to BashKevin OBrien
This document discusses the Bourne-Again Shell (bash) and basic navigation commands. It introduces bash as the default shell on Linux systems and explains that bash commands are built-in rather than existing as separate files. Examples of basic navigation commands like cd, pwd, and relative/absolute paths are provided. The document encourages practice of these fundamental commands and lists resources for learning more about bash.
The Shell Game Part 2: What are your shell choices?Kevin OBrien
The document discusses different types of shells available in Linux systems. It describes the original Bourne shell and the default Bourne-Again shell (bash). It also covers other shells like the Almquist shell, C shell, Korn shell, TENEX C shell, and Z shell. The document notes that users can temporarily switch shells or permanently change their default shell using the chsh command. Choosing different shells allows customizing capabilities to a user's preferences or system requirements.
The Shell Game Part 1: What is a shell?Kevin OBrien
This document provides an introduction to shells in Linux and Unix-like operating systems. It defines what a shell is, explaining that it is a program that provides the user interface through which commands can be typed to interact with and run programs on the operating system. It notes that servers commonly only use the shell interface without a graphical user interface to save resources. It discusses how shells are used for server administration tasks over secure shell (SSH) connections and why the shell interface is more powerful and efficient than a GUI. It encourages new users to take notes on useful commands they find.
Installing Software, Part 3: Command Line
1. Installing Software Part 3
Kevin B. O'Brien
Washtenaw Linux Users Group
http://www.lugwash.org
2. Previous Presentations
● In the first presentation we focused on GUI
package managers, which are mostly just front-
ends for mid-level package managers
● This also included a detailed look at
repositories
● In the second presentation we looked at
command-line package management tools
● These included tools for working with
repositories and for installing isolated packages
3. This Presentation
● For this presentation we will look at installing
software when you don't have a package
● This will all be command line techniques
● As before, anything to do with installing
software is an Administrative task, so you need
to either be logged in as root, or prefix your
commands with “sudo”
4. Why Install These Ways?
● The main reason for looking at these methods of
installing is that the repositories do not contain the
software you are looking to install
● It may be a small, specialized program that never
got into anyone's repository
● It may be a newer version that is not in the
repositories yet
– e.g. Firefox 3.5 is not yet (as of August 2009) in the
Ubuntu repositories, and probably won't be until
the next release in October 2009
5. Binaries vs. Source
● All software starts out as Source Code
● This is code that is written in a programming
language, like C, C++, Java, etc.
● While arcane, it is human-readable if you have
developed the skill
● It is not machine-readable, though
6. Source to Binary
● Machines (computers) can only read code that
is in zeros and ones
● This is called Binary code
● It is not human-readable
● A software program called a Compiler takes the
source code and turns it into Binary code that a
computer can read and execute
7. Compilers
● Even before Linux existed, one of the first
programs written for the Free Software world
was a compiler
● Richard M. Stallman, the founder of the GNU
project, created what is called the GNU
Compiler Collection, or gcc
● This continues to be developed and improved
8. Compiling programs
● Compiling transforms source code into binary code
● The source code can be in many possible
programming languages, so you need to have
compilers for each programming language
● Computers can have many possible architectures
(i386, 64-bit, RISC, ARM, etc.)
● Covering all of these possibilities is why we call gcc
a collection of compilers, not just a single compiler
9. Pre-compiled binaries
● When a compiler has turned source code into
binary code, the resulting file is called a binary
● Packages contain binary files, along with some
text files for configuration
● But when there is no package, sometimes you
can find a binary that has already been
compiled for you
● These most commonly have a file extension of
*.bin
10. *.bin Files
● The first thing to understand is that Linux does
not let any file be executed without specific
permission to do so
● Unlike Windows, you cannot just download
any old file off the Internet and run it
● You must first, either as root or using sudo,
make the file executable
● This is done with the command
chmod +x <filename>
11. chmod
● This is short for Change MODe
● The mode, in this case, includes the
permissions for what a file is allowed to do
● The “+x” specifies that you are letting it be
executable
● So this command lets you take a binary file that
you have downloaded, and set it to be
executable
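A quick sketch of this step, using the made-up file name myapp.bin in place of a real download:

```shell
# Stand-in for a freshly downloaded installer (the name is hypothetical)
touch myapp.bin

# New files are not executable until you grant the execute permission
chmod +x myapp.bin

# Verify: the x bits now show up in the mode string
ls -l myapp.bin
```

After the chmod, the first column of the ls -l output gains x flags, showing the file may now be run.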
12. Installing
● Once the file is executable, you install it
with this command
./<filename>
● That is just a period, a slash, and then the
name of the file, and Bob's your uncle!
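Putting the two commands together: this sketch fakes a downloaded installer with a tiny shell script (installer.bin is an invented name; a real one would come from a website) and then runs it exactly as described:

```shell
# Fake "downloaded" installer; it just announces itself when run
printf '#!/bin/sh\necho "installing..."\n' > installer.bin

chmod +x installer.bin   # step 1: make it executable
./installer.bin          # step 2: run it from the current directory
```

The ./ prefix tells the shell to run the file from the current directory rather than searching the PATH; here the stand-in installer simply prints "installing...".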
13. Installing From Shell Scripts
● Sometimes you will have a program distributed as a
shell script
● These are files with an extension of *.sh
● But in all respects the installation is the same as with
*.bin files
● First, make the script executable using
chmod +x <filename>
● Then execute it using
./<filename>
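One aside worth knowing for *.sh files: you can also hand the script directly to the interpreter, which skips the chmod step entirely (setup.sh here is an invented stand-in for a downloaded install script):

```shell
# Invented stand-in for a downloaded install script
printf 'echo "setup complete"\n' > setup.sh

# Passing the script to sh runs it even without the execute bit set
sh setup.sh
```

This works because sh, not the kernel, reads the file, so the execute permission on the file itself is never consulted.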
14. Is it really this easy?
● Most of the time it is
● But if you have a problem you may need to
check a user forum or web site to get additional
help
15. Compiling Programs
● The term “Open Source” comes from the
requirement in GPL and similar licenses that the
source code of a program must be available in a
reasonable way to end users
● Some of those users may be programmers who will
modify the code in some way to fit their needs
● But most of us don't know which end of C++ is used
to drive screws, and which to pound nails
16. But many of us do it
● Still, compiling is something many of us do more
often than we may know
● For example, kernel modules have to be re-compiled
whenever the kernel changes, which happens about
once a month these days
● Example: VirtualBox
● VirtualBox has a program that recompiles the kernel
module, and after a kernel update you will get an
error message telling you to run this program when
you try to run VirtualBox
17. Isn't it hard?
● Actually, not really
● By definition, any Open source program must make
the source code available
● You can download the source code yourself and
compile it yourself
● It may even work better for you than a pre-compiled
binary
18. Benefits of compiling
● Possibly a minor one for many people, but
compiling your software can teach you a lot about
how your computer works
● The biggest benefit is that the compiled program is
tuned for your system, your CPU, your version of
Linux, etc.
● You can even (this is optional) go into the source
code and really tune it up before compiling
19. Reasons not to compile
● You must do all of the things a package
manager would do for you otherwise
– You need to resolve all of the dependencies
– You need to look for and install any bug fixes,
updates, etc. as needed
– If you change your mind, you need to manually
uninstall
20. Pre-requisites
● Before you start, make sure you have
everything you need
● This can vary by distribution in terms of what
is installed by default, and what needs to be
added by you
● For example, in Ubuntu and its derivatives
(Kubuntu, Xubuntu) you need to install a
package called build-essential to have all of the
software needed to compile and install
21. Pre-requisites 2
● In general here are some things you need to
have installed if your distro does not have a
neat package like Ubuntu does
– Headers for the Linux Kernel
– gcc
– make
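A quick way to confirm these prerequisites are present is to probe for each tool with command -v, which succeeds only when the program is on your PATH (the tool list below just reflects the slide; extend it as needed):

```shell
# Report which build tools are already installed
for tool in gcc make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: missing - install it before compiling"
  fi
done
```

On an Ubuntu system, installing the build-essential package pulls in both of these at once.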
22. Get the Source Code
● The first step to compiling and installing a program
is to get the source code
● Remember, if it is an Open Source program they
must make it available
● You may have two options:
– A Source Archive – good for home users
– A version control system of some kind – intended
for developers (e.g. SVN, Git)
23. Firefox Source Code
● You can go to the Firefox site
https://developer.mozilla.org/en/download_mozilla_source_code
and see how they do it
● For developers, they use a version control
system called Mercurial
● For the rest of us, they provide “tarballs”,
which is the format we will cover shortly
● These are provided on an ftp server
24. Temporary working directory
● You can use a temporary working directory for
these activities
● For example you can use the ~/tmp directory
● This is the equivalent of /home/username/tmp
● However, there is a good reason to keep your
build directory after the install
● So I would suggest using something other than
the ~/tmp directory. Maybe something like
~/builds that you create in your home directory
25. Source Archives (tarballs)
● Source code is always contained in text files
● But it is not usually a single file
● Most of the time it is a collection of files
● These are put into what are called Archives, which
have an extension *.tar (Tape ARchive), a name
which comes from the days of tape backup systems
● These files are then compressed to save on
bandwidth
26. Compression of Archives
● These files are usually compressed with either
gzip or bzip2
● The extensions then become *.tar.gz, or
*.tar.bz2
● Uncompressing is then a simple matter of using
cd to go to the directory the file is in (e.g.
~/builds), and then
– tar -zxpf appname.tar.gz
– tar -jxpf appname.tar.bz2
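To see both halves of this round trip, the sketch below builds a toy appname.tar.gz (all file and directory names are invented) and then unpacks it with the exact flags from the slide:

```shell
# Build a toy source tree and tar it up (what the developer would do)
mkdir -p appname
echo 'int main(void){return 0;}' > appname/main.c
tar -czf appname.tar.gz appname
rm -r appname

# Unpack it (what you do after downloading): z = gunzip, x = extract,
# p = preserve permissions, f = the archive file name follows
tar -zxpf appname.tar.gz
ls appname
```

Extraction recreates the appname directory with its main.c inside, exactly as the developer packed it.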
27. Dependencies
● This is a potential problem with compiling from
tarballs
● Sometimes the developer will tell you up front
what the dependencies are
● Otherwise, you will get errors when you try to
compile
● This is often the hardest part of the process, but
not at all impossible to do
28. Three-Step Dance
● Compiling and installing software involves a
Three-Step Dance
– Configure
– Make
– Make Install
29. Configure 1
● This is where the dependencies will come up
● This is also where you can select some settings
you want to use
● When you run this, you are running a script
called “configure” that is part of the collection
you downloaded in the tarball
● In this script, the programmer(s) will do several
things
30. Configure 2
● First, they will check for the required “helper”
programs (dependencies) they need to have to run
this program
● If they do not find them, they will give an error
● If this is because the program is not on your system,
you need to install it
● Sometimes it is on your system, but not where the
script expects it to be. You should be able to tell it
the path to find the program in that case.
31. Configure 3
● This script will also have options of various kinds
● For instance, certain features of the program may
require a dependency. In this case, if you do not
need those features, you can ignore it
● You may be able to specify a non-standard install
location
● You may have the option to disable features you
don't need that might make the program run more
slowly
32. Options
● The point about options is that they are optional
● You can safely ignore options
● A good rule of thumb is that if you are not sure
what the option is asking you, leave it alone
● Remember, if the program really needs
something, it won't let you get any further until
you take care of it
33. More on Options
● Optional features will generally be presented in a
format like
--disable-<feature>
--enable-<feature>
● Optional packages will generally be presented
in a format like
--with-<packagename><other options>
– Often these package option lines will have
(default:test) at the end of the line. This
tells the script to check and see if you
have this package installed
34. Install Location
● The option that tells the script where to install
is the --prefix option
● Normally compiled software will go in the
/usr/local directory tree
– Executables go in /usr/local/bin
– Libraries go in /usr/local/lib
● Some applications (e.g. kde apps) need to be
installed in a different location
35. Example 1
● cd into the directory ~/builds/<programname>
and run
./configure --prefix=/usr --disable-
<option>
● This will run the configure script for this
program, set the install directory to /usr
(instead of /usr/local), and disable the
option selected
36. Example 2
● You should get output that looks something
like
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking whether build environment is sane... yes
and so on
● If you are missing a dependency you will
get a warning that says that the program
requires some other program or library
37. More on Dependencies
● Often what is needed here is something called a
development package
● You may already have the base package
installed, but what is needed is the associated
development package
● For example, Ubuntu uses the Gnome desktop
by default, and so gtk is certainly installed. But
you may not have libgtk2.0-dev installed,
and that is what your program needs
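On a Debian-based system, the usual fix is to install the matching development package and rerun the configure script. A hedged sketch, using the package name from the example above:

```
sudo apt-get install libgtk2.0-dev   # install the headers the compiler needs
./configure                          # then try the configure step again
```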
38. Step One: configure
● So, once you understand the dependencies,
and you have the necessary compilers and
utilities installed, you are ready to do step
one: configure
● I will assume you are in a directory like
~/builds/<packagename>, in which you have
the source code you downloaded (and
unzipped/extracted)
● In this directory, type ./configure
39. Configure 2
● If you get errors, such as unsatisfied
dependencies, you will need to first install
those dependencies
● Then try it again
● Wash, rinse, repeat, until you no longer get any
errors
● That should mean you have now successfully
done your configure
40. The Makefile
● The output of a successful configure is a
makefile
● This file contains all of the instructions for
successfully compiling a binary
● Once you have a makefile, the hard part of
compiling from source is over. You should now
have resolved any dependency issues.
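If you want to confirm what configure decided, the generated makefile records it. A small sketch, assuming an Autoconf-style Makefile in the current directory:

```
grep -E '^(prefix|CC) *=' Makefile   # show the install prefix and chosen compiler
```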
41. Step Two: make
● At this point, you are ready for the second step
of the three-step dance
● In the same directory that you were in to create
your makefile, run the make command
make
● The most common problem at this point
would be an error that says it cannot find a
makefile
● This means the first step, configure, did not
successfully complete, so go back to step
one
42. Make 2
● You can watch the output of the make
command, but it is not necessary
● It will generate output as each source code file
is processed
● This can take seconds, minutes, or longer,
depending on the amount of source code being
processed
● I'd just get a cup of coffee
43. Checking your make
● Some open source projects have their own test
suite
● You can try this to see if your make is correct
make check
● This step is optional, but it is a good idea
● If you discover that a feature of the
program did not make correctly, but it is
one you never want to use, you can ignore
the error
44. Step Three: make install
● Once the make command has finished, you are
ready to take step three: make install
● You absolutely must be root or use sudo here
● Example: sudo make install
● This will install the compiled binary in the
location you specified in the configure step
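Putting the three-step dance together, a whole session looks something like this (paths and names are placeholders, and the prefix here is just the default):

```
cd ~/builds/<programname>        # the directory holding the extracted source
./configure --prefix=/usr/local  # step one: generate the makefile
make                             # step two: compile
make check                       # optional: run the project's test suite
sudo make install                # step three: copy files into /usr/local
```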
45. Uninstall
● If you change your mind, or for any reason you need
to remove the program, go back into your build
directory (e.g. ~/builds/<programname>) and run
(as root, or using sudo)
sudo make uninstall
● This is why I recommended a separate builds
directory, instead of the ~/tmp directory often
seen in how-tos
● I never put things I want to keep in a
temporary directory
46. Running the program
● Usually you can run the program you just
installed by typing the name of the program and
pressing enter
● If you get an error that no such file or program
could be found, that may mean you installed it
into a non-standard location
● In that case, you need to add the location to
your path
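You can check whether the shell can find the program at all with command -v, which prints the full path when the name is on your PATH and fails silently when it is not. A runnable sketch, using ls as a stand-in for your program's name:

```shell
# Ask the shell where it would find a command; command -v fails
# (non-zero exit status) if the name is not on the PATH
if command -v ls > /dev/null; then
    echo "found"
else
    echo "not on PATH"
fi
```

Replace ls with your program's name to diagnose the "no such file" error.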
47. Path
● The PATH is set system-wide in the
/etc/profile file, and per user in
~/.profile (or ~/.bashrc)
● Edit one of these with a text editor and add
the directory containing your program
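For a quick fix in the current shell, you can append the install directory to PATH directly; adding the same line to your shell's startup file makes it permanent. A runnable sketch (/usr/local/myprog/bin is a hypothetical example directory):

```shell
# Append a non-standard install location to the PATH for this shell session
export PATH="$PATH:/usr/local/myprog/bin"
echo "$PATH"
```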