This document provides information on using and configuring the BASH shell in Linux. It discusses Linux shell environments and BASH in particular. It covers aliases, the which command, quoting, command history, shell variables including local variables, environment variables and the PATH variable. It also discusses command expansion, the initialization files ~/.bash_profile and ~/.bashrc, BASH tab completion, and writing shell scripts including using conditions.
Shells enable users to enter commands and run programs from the command line. Common shells include the Bourne and C shells; newer shells add features like command history, line editing, and tab completion. Shells support environment variables for customizing settings, I/O redirection for modifying program input and output, and special characters and quoting for controlling command parsing and argument passing.
2. Linux Shell Overview
• The Linux shell is a program that allows the user to interact with the system by entering commands in a text-based command-line environment (as opposed to graphical desktop environments).
• The shell environment is also referred to as the “CLI”, or Command-Line Interface.
• Even though the graphical desktop environments for Linux have made very significant breakthroughs in system management & administration, removing much of the need to use the shell, many advanced and professional Linux/UNIX users still prefer to work in a shell environment, as it provides quick and easy access to Linux’s scripting power-tools, advanced monitoring & debugging tools, and more.
3. Aliases
• The primary focus in this course will be on BASH, which is a modern, actively developed shell and the successor to the commonly used sh in both Linux & UNIX systems.
• Shell aliases provide us with a way to:
Substitute short commands for long ones.
Turn a series of commands into a single command that executes them.
Create alternate forms of existing commands.
Add options to commands and make those forms the default.
• To view the aliases for the current user, run: alias
• Create a new alias with the command: alias aliasname=value
• To remove an alias, use: unalias aliasname (see the short session below).
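A brief illustrative session (the alias name “ll” and its definition are examples, not taken from the slides):
# alias ll='ls -l'
# ll /tmp
(runs “ls -l /tmp”)
# alias
alias ll='ls -l'
# unalias ll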
4. The “which” Command
• The “which” command displays the pathname to accessible commands:
which filename
• The output given by “which” is the absolute pathname of the command searched for.
• “which” is very useful when commands do not return the expected results; it allows us to look them up and see where they are being executed from.
# which vim
/usr/bin/vim
5. Quoting
• Shell metacharacters, as discussed before, are interpreted in a special way by the shell.
• There are a number of ways to override these special meanings and have these characters behave like any other regular character; this is done by quoting:
‘ ‘ - single quotes cancel the special meaning of ALL metacharacters within them.
“ “ - double quotes cancel the special meaning of all metacharacters, except for $, ` (backtick) and \.
\ - a backslash cancels the special meaning of the single character that immediately follows.
• Note that the quoting characters are metacharacters themselves as well.
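A short session showing the three quoting forms (output assumes the root user, whose home directory is /root):
# echo '$HOME'
$HOME
# echo "$HOME"
/root
# echo \$HOME
$HOME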
6. Command History
• The BASH shell saves a history of every command executed from the command line.
• The history is saved into a file, located by default at: ~/.bash_history
• By default, BASH keeps a history of the last 500 commands (the stock default; many distributions raise it); this value (HISTSIZE) and the location in which the history is saved (HISTFILE) can be customized, as shown below.
• In order to display the command history, run: “history [n]”
Running “history 3” will display the last 3 commands entered.
Running “history” with no argument will display all commands, from the first line in the history file to the last command entered.
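Both values can be inspected and changed like any other shell variable (the values shown here are illustrative):
# echo $HISTSIZE
500
# HISTSIZE=2000
# HISTFILE=~/.custom_history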
7. Command History
• The history can be searched for specific strings in
numerous ways:
CTRL-r opens the reverse history search prompt; we type in the
command or string we wish to search for, and once a match appears,
hitting CTRL-r again jumps to the next (older) hit.
Another search method is to display the history contents and filter
them, as shown below:
# history
…
40 id
41 ls
42 cd myDir
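For example, filtering with grep and re-running an entry by its number with the “!” history-expansion character (the entry numbers here are illustrative):
# history | grep myDir
42 cd myDir
# !42
cd myDir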
8. Shell Variables
• Variables are placeholders for information.
• Two types of variables exist:
Local – these variables affect the current shell session only.
Environment – these variables are passed on to any new (child) shell
session; they are automatically initialized every time a new session
starts.
• Shell variables can be either user-defined or built into the
system; they can also be pre-defined and then customized
later.
• When a variable is created, it is Local and effective only within
the shell environment that created it.
• In order for a variable to be available in other sessions as well,
it must be exported.
9. Shell Variables
• By convention, variables in Linux are defined in upper-case
characters; this is not a must though, lower-case characters
would work just as well.
• This example creates a variable named MAILLOG and assigns
the value “/var/log/maillog” to it:
MAILLOG=/var/log/maillog
• Once we have defined the variable, we can use it in
commands, such as: “vim $MAILLOG”, which will start vim
with the argument “/var/log/maillog” – i.e. open that file.
• In order to display the value of an existing variable we can
use: “echo $MAILLOG”
• Keep in mind: Linux IS case-sensitive.
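Putting this together in a short session:
# MAILLOG=/var/log/maillog
# echo $MAILLOG
/var/log/maillog
# vim $MAILLOG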
10. Shell Variables
• The “$” sign is a metacharacter meaning “substitute with
value”; it is used to expand the assigned value of a variable.
• When attempting to expand a variable, the shell will look in
both its local and environment variable lists and find the
value assigned to the variable we’ve used, $MAILLOG in our
case.
11. Local Shell Variables
• User-defined variables enable the user to determine both the
variable name and its value.
• The syntax for creating a new variable is:
VAR=value
• Make sure there are no spaces on either side of the “=” sign.
• The “unset” command removes a variable, the syntax is:
unset VAR
• All currently set variables and their values can be displayed
with the “set” command.
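For example (the variable name and value are arbitrary):
# VAR=hello
# set | grep ^VAR
VAR=hello
# unset VAR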
12. Environment Variables
• Environment variables are copied to child processes upon
creation.
• Every process has its own copy and no process can touch
another’s memory.
• In order to turn a local variable into an environment variable
we use the “export” command; there are two methods of
doing this:
First method: create a local variable, then export it, in two commands:
MAILLOG=/var/log/maillog ; export MAILLOG
Second method: create the new variable while exporting it:
export MAILLOG=/var/log/maillog
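A quick way to see the difference (a sketch; “bash -c” starts a child shell, and the single quotes make the child, not the parent, expand the variable):
# LOCALVAR=1
# bash -c 'echo $LOCALVAR'
(an empty line - the local variable was not inherited)
# export LOCALVAR
# bash -c 'echo $LOCALVAR'
1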
13. Environment Variables
• Linux provides the user with the ability to change and
customize the values of the default environment variables.
• Environment variables can be modified temporarily; such
changes affect the current shell session only and are lost
when it is closed.
• In order to make environment variable changes permanent,
their values will need to be changed in the initialization files.
• We can view the environment variables by running the
command: “env”.
14. The PATH Variable
• The PATH variable allows the shell to locate commands by
searching its directories, in the order they are listed in the
variable.
• In order to add a new directory to the PATH variable, we’d
use the following command:
PATH=$PATH:/new/directory/here/
• PATH is already exported, there is no need to export it again
after adding to it.
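For example (the existing PATH value and the added directory /opt/mytools/bin are hypothetical):
# echo $PATH
/usr/local/bin:/usr/bin:/bin
# PATH=$PATH:/opt/mytools/bin
# echo $PATH
/usr/local/bin:/usr/bin:/bin:/opt/mytools/bin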
15. Variables & Command Expansion
• Command expansion is the ability to use the output of a
command anywhere when writing shell commands.
Use the “$()” meta-character to declare a command expansion
block
$( command ; command ; … )
# ls
dir1 file1 file2
# VAR=$( ls )
# echo $VAR
dir1 file1 file2
16. The Initialization Files
• Initialization files contain commands and variable settings
that are executed by every shell that is started.
• There are two levels of initialization files:
System wide: /etc/profile – accessible only by the sys-admin.
User-specific: ~/.bash_profile and ~/.bashrc – accessible by the
owning user.
The .bash_profile file is loaded once, at the beginning of every login
session. .bashrc is loaded every time a new shell is opened, for example
when opening a terminal via the graphical desktop environment.
Neither of these files is required to exist, but if they do exist, they will
be read and applied by the system.
These two files can be used by the owning user to customize their own
working environment.
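A minimal ~/.bashrc sketch (the alias, variable and directory are only examples):
# ~/.bashrc - read by every new interactive shell
alias ll='ls -l'
export EDITOR=vim
PATH=$PATH:$HOME/bin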
17. The /etc/profile File
• When a user logs in, the system first reads and applies
everything from the /etc/profile file into that user’s
environment and only then reads the user’s .bash_profile
and/or .bashrc
• The /etc/profile file is maintained by the sys-admin and
typically:
Exports environment variables.
Exports PATH, the default command search path.
Sets the TERM variable to the default terminal type.
Displays the contents of the /etc/motd file.
Sets the default file creation permissions (the umask).
18. BASH Tab Completion
• BASH has the ability to auto-complete command, directory
and file names upon hitting the TAB key.
• As long as the string we wish to auto-complete is
unambiguous, hitting TAB once will complete it.
• When the string is ambiguous, hitting TAB twice will list all of
the options that begin with the string we have typed so far.
• Adding another character that turns our string from
ambiguous to unique again allows a single TAB to auto-
complete it.
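An illustrative session (the directory names are examples; <TAB> marks a key press):
# ls /usr/lo<TAB>
(completes to: ls /usr/local/)
# ls /usr/local/s<TAB><TAB>
sbin/ share/ src/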
19. Shell Scripts
• Shell commands can be run as an individual set of tasks, or as
a unified ‘flow’ of tasks; the latter is usually called a “Shell
Script”.
• A Shell Script can simply be a serial set of commands:
ls ; df ; ps
and it can also include flow control, arithmetic operators,
variables and functions.
• The most basic qualifier for a shell script is that it is saved in a
file.
The first line of this file should begin with the “#!” meta-characters,
which indicate which interpreter should run this script:
#!/bin/bash
command
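A small runnable sketch (the filename report.sh is arbitrary):
#!/bin/bash
# report.sh - print a short system report
echo "Directory listing:"
ls
echo "Disk usage:"
df
echo "Running processes:"
ps
Make it executable with “chmod +x report.sh” and run it with “./report.sh”.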
20. Shell Scripts - Conditions
• In order to run two or more commands in a serial manner,
we use the ‘;’ meta-character.
• By using the “OR” (||) or “AND” (&&) meta-characters, we
can add the appropriate logical condition to our command-set.
The decision whether to run the next command is based on
the “Exit Status” of the previous command, which is also
viewable by reading the value of the special variable “$?”.
An exit status value of 0 means success, or ‘true’; any non-zero
value is treated as ‘false’.
# ls -l file && echo "My file exists"
-rw-r--r-- 1 user staff 4 Jul 22 13:00 file
My file exists
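The “OR” form and $? in action (nosuchfile is assumed not to exist; the exact non-zero exit status varies between ls implementations):
# ls nosuchfile
ls: nosuchfile: No such file or directory
# echo $?
2
# ls nosuchfile || echo "My file is missing"
ls: nosuchfile: No such file or directory
My file is missing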
21. Shell Scripts - Conditions
• In order to run multiple commands after a logical condition,
we can use the “{ }” meta-characters to declare a ‘Command
Set’
# ls file && {
> echo "My file exists"
> ls -l file
> echo "This is good news"
> }
file
My file exists
-rw-r--r-- 1 shaycohen staff 4 Jul 22 13:00 file
This is good news
22. Shell Scripts - Conditions
• Another way of using conditions is by using the ‘if’ command
if expression
then
command
else
command
fi
# if ls nosuchfile
> then
> echo "My file exists"
> else
> echo "My file does not exist"
> fi
ls: nosuchfile: No such file or directory
My file does not exist
23. Shell Scripts - Conditions
• There are two main types of expressions:
– Logical Expression “[[ expression ]]” – the expression can use any of
the valid flags of the “test” command.
– Arithmetic Expression “(( expression ))” – the expression uses the
shell’s arithmetic operators (the external “expr” command offers
similar functionality).
Use the manual pages to find out how to use the ‘expr’ and ‘test’
commands.
Q: What will the following command return as output?
# expr 1+1
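A brief sketch of both expression forms (the file and the numbers are arbitrary):
# [[ -f /etc/passwd ]] && echo "regular file"
regular file
# (( 2 > 1 )) && echo "2 is greater"
2 is greater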
24. Shell Scripts - Loops
• Bash supports three main types of loops
– List loop “for”
– Conditional loop “while” | “until”
for VAR in “value1” “value2” …
do
command $VAR
done
while expression
do
command
done
25. Shell Scripts - Loops
# ls
dir1 file1 file2
# for FILE in file1 file2
> do
> ls $FILE && echo "$FILE Exists"
> done
file1
file1 Exists
file2
file2 Exists
26. Command line parsing
• Before running the given commands, Bash parses the
command line and its arguments and replaces any
meta-characters with their evaluated values.
• Use Bash with the ‘-x’ flag to get detailed information about
the result of parsing every command.
# for FILE in $(ls)
> do
> echo "$((COUNT++)) - $FILE"
> done
0 - dir1
1 - file1
2 - file2
After expansion, the shell effectively executes:
# for FILE in dir1 file1 file2
> echo “0 - dir1”
> echo “1 - file1”
> echo “2 - file2”
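A rough sketch of the ‘-x’ trace, assuming the loop above is saved in a script named parse.sh (trace lines are prefixed with “+”; the exact format varies between bash versions):
# bash -x parse.sh
++ ls
+ echo '0 - dir1'
0 - dir1
+ echo '1 - file1'
1 - file1
+ echo '2 - file2'
2 - file2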
27. Exercise
• Write your first shell script: hello.sh
The script should print out “Hello World” to the terminal
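One possible solution (a minimal sketch):
#!/bin/bash
# hello.sh - prints a greeting to the terminal
echo "Hello World"
Run it with “bash hello.sh”, or make it executable first: chmod +x hello.sh && ./hello.sh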
Editor's Notes
#2: Discussion: the importance of the command line as a way to communicate with a computer. - Why is it important? - Learning this language is the first step on the way to becoming a computing specialist. - Linux, Windows, Mac, cellphones: a vast market.