A brief overview of caching mechanisms in a web application, looking at the different layers of caching and how to use them in a PHP code base. We also compare Redis and Memcached, discussing their advantages and disadvantages.
Adaptive Query Execution: Speeding Up Spark SQL at Runtime (Databricks)
Over the years, there has been extensive and continuous effort on improving Spark SQL’s query optimizer and planner, in order to generate high-quality query execution plans. One of the biggest improvements is the cost-based optimization framework that collects and leverages a variety of data statistics (e.g., row count, number of distinct values, NULL values, max/min values, etc.) to help Spark make better decisions in picking the optimal query plan.
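As an illustrative sketch (the table name is hypothetical; the configuration keys and ANALYZE TABLE syntax are standard Spark SQL), statistics collection and the cost-based optimizer can be exercised from the command line:
spark-sql --conf spark.sql.cbo.enabled=true \
          --conf spark.sql.cbo.joinReorder.enabled=true \
          -e "ANALYZE TABLE sales COMPUTE STATISTICS FOR ALL COLUMNS;"   # collect row counts and column stats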
Node management in Oracle Clusterware involves monitoring nodes and evicting nodes if necessary to prevent split-brain situations. The CSSD process monitors nodes through network heartbeats over the private interconnect and disk heartbeats using the voting disks. If a node fails to respond within the configured time limits for either heartbeat, it will be evicted from the cluster. Eviction involves sending a "kill request" to the node over the remaining communication channels to forcibly remove it. With Oracle Clusterware 11.2.0.2, reboots of nodes can be avoided by gracefully shutting down the Oracle Clusterware stack instead of an immediate reboot when fencing a node.
The Rise of ZStandard: Apache Spark/Parquet/ORC/Avro (Databricks)
Zstandard is a fast compression algorithm that you can use in Apache Spark in various ways. This talk briefly summarizes the evolution of Apache Spark in this area, four main use cases, their benefits, and the next steps:
1) ZStandard can optimize Spark local disk IO by compressing shuffle files significantly. This is very useful in K8s environments. It’s beneficial not only when you use `emptyDir` with the `memory` medium; it also maximizes the OS cache benefit when you use shared SSDs or container-local storage. In Spark 3.2, SPARK-34390 takes advantage of Zstandard's buffer pool feature, and its performance gain is impressive, too.
2) Event log compression is another area where you can save storage cost on cloud storage like S3 and improve usability. SPARK-34503 officially switched the default event log compression codec from LZ4 to Zstandard.
3) Zstandard data file compression can give you more benefits when you use ORC/Parquet files as your input and output. Apache ORC 1.6 already supports Zstandard, and Apache Spark enables it via SPARK-33978. The upcoming Parquet 1.12 will support Zstandard compression.
4) Last, but not least, since Apache Spark 3.0, Zstandard is used to serialize/deserialize MapStatus data instead of Gzip.
There is more community work to utilize Zstandard to improve Spark. For example, the Apache Avro community also supports Zstandard, and SPARK-34479 aims to support Zstandard in Spark's Avro file format in Spark 3.2.0.
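A minimal sketch of enabling these codecs together from spark-submit, assuming Spark 3.2 and a placeholder application script (all configuration keys shown are standard Spark settings):
# shuffle/local-disk IO (use case 1), event logs (2), Parquet/ORC data files (3)
spark-submit \
  --conf spark.io.compression.codec=zstd \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.compress=true \
  --conf spark.eventLog.compression.codec=zstd \
  --conf spark.sql.parquet.compression.codec=zstd \
  --conf spark.sql.orc.compression.codec=zstd \
  your_app.py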
Spark supports four cluster managers: Local, Standalone, YARN, and Mesos. YARN is highly recommended for production use. When running Spark on YARN, careful tuning of configuration settings like the number of executors, executor memory and cores, and dynamic allocation is important to optimize performance and resource utilization. Configuring queues also allows separating different applications by priority and resource needs.
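For illustration, a hedged sketch of such tuning via spark-submit (queue name, sizes, and script are hypothetical; the flags and configuration keys are standard Spark-on-YARN settings):
spark-submit --master yarn --deploy-mode cluster \
  --queue analytics \
  --conf spark.executor.memory=8g \
  --conf spark.executor.cores=4 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  --conf spark.shuffle.service.enabled=true \
  your_app.py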
This Hadoop video will help you understand the different tools present in the Hadoop ecosystem. It will take you through an overview of the important tools of the Hadoop ecosystem, which include Hadoop HDFS, Hadoop Pig, Hadoop YARN, Hadoop Hive, Apache Spark, Mahout, Apache Kafka, Storm, Sqoop, Apache Ranger, and Oozie, and also discuss the architecture of these tools. It will cover the different tasks of Hadoop such as data storage, data processing, cluster resource management, data ingestion, machine learning, streaming, and more. Now, let us get started and understand each of these tools in detail.
The following topics are explained in this Hadoop ecosystem presentation:
1. What is Hadoop ecosystem?
1. Pig (Scripting)
2. Hive (SQL queries)
3. Apache Spark (Real-time data analysis)
4. Mahout (Machine learning)
5. Apache Ambari (Management and monitoring)
6. Kafka & Storm
7. Apache Ranger & Apache Knox (Security)
8. Oozie (Workflow system)
9. Hadoop MapReduce (Data processing)
10. Hadoop Yarn (Cluster resource management)
11. Hadoop HDFS (Data storage)
12. Sqoop & Flume (Data collection and ingestion)
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course has been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
This course will enable you to:
1. Understand the different components of the Hadoop ecosystem such as Hadoop 2.7, Yarn, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
2. Understand Hadoop Distributed File System (HDFS) and YARN as well as their architecture, and learn how to work with them for storage and resource management
3. Understand MapReduce and its characteristics, and assimilate some advanced MapReduce concepts
4. Get an overview of Sqoop and Flume and describe how to ingest data using them
5. Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
6. Understand different types of file formats, Avro schema, using Avro with Hive and Sqoop, and schema evolution
7. Understand Flume, its architecture, sources, sinks, channels, and configurations
8. Understand HBase, its architecture, data storage, and working with HBase. You will also understand the difference between HBase and RDBMS
9. Gain a working knowledge of Pig and its components
10. Do functional programming in Spark
11. Understand resilient distributed datasets (RDDs) in detail
12. Implement and build Spark applications
13. Learn Spark SQL, including creating, transforming, and querying DataFrames
14. Understand the common use cases of Spark and the various iterative algorithms
Learn more at https://www.simplilearn.com/big-data-and-analytics/big-data-and-hadoop-training.
Redis is an in-memory key-value store that can be used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, sorted sets, with commands to add, remove, and get values. Redis works with an optional disk storage for persistence and supports master-slave replication for high availability. Common use cases include caching, queues, user sessions, and real-time analytics.
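A quick sketch of those use cases from redis-cli (key names are made up for illustration):
redis-cli SET session:42 '{"user":"alice"}' EX 3600     # string with a TTL: caching / user sessions
redis-cli LPUSH jobs:pending '{"task":"resize-image"}'  # list as a simple queue
redis-cli HSET user:42 name alice visits 7              # hash holding an object's fields
redis-cli ZINCRBY pageviews 1 /home                     # sorted set for real-time counters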
Clone Oracle Databases In Minutes Without Risk Using Enterprise Manager 13c (Alfredo Krieg)
1) Oracle Enterprise Manager allows users to clone Oracle databases in minutes without risk by using its snap clone functionality.
2) Snap clones provide rapid, space efficient cloning of databases across storage systems. They also enable integrated database lifecycle management.
3) Enterprise Manager provides both administrator-driven and self-service user workflows for creating snap clones of databases for testing and development.
This document summarizes a presentation about optimizing HBase performance through caching. It discusses how baseline tests showed low cache hit rates and CPU/memory utilization. Reducing the table block size improved cache hits but increased overhead. Adding an off-heap bucket cache to store table data minimized JVM garbage collection latency spikes and improved memory utilization by caching frequently accessed data outside the Java heap. Configuration parameters for the bucket cache are also outlined.
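As a rough sketch of that last point (the size is hypothetical; hbase.bucketcache.ioengine and hbase.bucketcache.size are standard HBase properties), the off-heap bucket cache is enabled via hbase-site.xml plus off-heap room for the region server JVM:
# The two core properties go inside <configuration> in hbase-site.xml:
#   hbase.bucketcache.ioengine = offheap
#   hbase.bucketcache.size     = 8192   (cache size in MB)
# Then reserve off-heap memory for the region server in hbase-env.sh:
export HBASE_OFFHEAPSIZE=9G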
The document discusses using Hazelcast distributed locks to synchronize access to critical sections of code across multiple JVMs and application instances. It describes how Hazelcast implements distributed versions of common Java data structures, including distributed locks via its ILock interface. It provides examples of configuring a Hazelcast cluster programmatically by specifying cluster properties like IP addresses and ports, and shows how to obtain and use a distributed lock within a try-finally block to ensure it is released.
Cassandra by example - the path of read and write requests (grro)
This article describes how Cassandra handles and processes requests. It will help you to get a better impression about Cassandra's internals and architecture. The path of a single read request as well as the path of a single write request will be described in detail.
This is a presentation deck for Data+AI Summit 2021 at
https://databricks.com/session_na21/enabling-vectorized-engine-in-apache-spark
Apache Knox setup and Hive and HDFS access using Knox (Abhishek Mallick)
There are two ways to set up Apache Knox on a server: using Ambari or manually. The document then provides steps for configuring Knox using Ambari, including entering a master secret password and restarting services. It also provides commands for testing HDFS and Hive access through Knox by curling endpoints or using Beeline.
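For instance, a hedged version of the curl check (gateway host, topology, and the demo guest credentials are placeholders; the URL follows Knox's gateway/{topology}/webhdfs pattern):
curl -iku guest:guest-password \
  'https://knox.example.com:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS'   # list an HDFS directory through Knox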
Introducing Confluent labs Parallel Consumer client | Anthony Stubbes, Confluent (HostedbyConfluent)
Consuming messages in parallel is what Apache Kafka® is all about, so you may well wonder, why would we want anything else? It turns out that, in practice, there are a number of situations where Kafka’s partition-level parallelism gets in the way of optimal design.
This session will go over some of these types of situations that can benefit from parallel message processing within a single application instance (aka slow consumers or competing consumers), and then introduce the new Parallel Consumer labs project from Confluent, which can improve functionality and massively improve performance in such situations.
It will cover:
- Different ordering modes of the client
- Relative performance improvements
- Usage with other components like Kafka Streams
- An introduction to the internal architecture of the project
- How it can achieve all this in a reassignment-friendly manner
The document discusses Apache HBase replication, which asynchronously copies data between HBase clusters. It uses a push-based architecture shipping write-ahead log (WAL) entries similarly to MySQL replication. Replication provides eventual consistency and preserves the atomicity of individual updates. Administrators can configure replication by setting parameters and managing peer clusters and queues stored in Zookeeper. Replicated edits flow from the replication source on a region server to the remote replication sink where they are applied.
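As a small sketch of that administrator workflow (peer ID, ZooKeeper quorum, and table/family names are hypothetical; add_peer and REPLICATION_SCOPE are standard HBase shell usage):
echo "add_peer '1', CLUSTER_KEY => 'zk-remote:2181:/hbase'" | hbase shell      # register the remote cluster
echo "alter 'mytable', {NAME => 'cf', REPLICATION_SCOPE => 1}" | hbase shell   # replicate this column family's WAL edits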
On the Road to DSpace 7: Angular UI + REST (Tim Donohue)
Updates on the DSpace 7 efforts, including status of Angular UI development and new REST API. This presentation was given at the Open Repositories 2017 conference on Wednesday, June 28, 2017 in Brisbane, Australia.
This document provides an overview and best practices for operating HBase clusters. It discusses HBase and Hadoop architecture, how to set up an HBase cluster including Zookeeper and region servers, high availability considerations, scaling the cluster, backup and restore processes, and operational best practices around hardware, disks, OS, automation, load balancing, upgrades, monitoring and alerting. It also includes a case study of a 110 node HBase cluster.
This is a presentation I gave in an OS course. It mainly focuses on the Linux file system and only points out the differences from the Windows NTFS file system, without digging into NTFS in detail.
This document summarizes a presentation on Oracle RAC (Real Application Clusters) internals with a focus on Cache Fusion. The presentation covers:
1. An overview of Cache Fusion and how it allows data to be shared across instances to enable scalability.
2. Dynamic re-mastering which adjusts where data is mastered based on access patterns to reduce messaging.
3. Techniques for handling contention including partitioning, connection pools, and separating redo logs.
4. Benefits of combining Oracle Multitenant and RAC such as aligning PDBs to instances.
5. How Oracle In-Memory Column Store fully integrates with RAC including fault tolerance features.
FASTER is a log-structured storage system that provides high performance, scalability and durability. It uses a group commit technique called Concurrent Prefix Recovery (CPR) that allows threads to commit in parallel without blocking each other. CPR chooses commit points for each thread to enable non-blocking commits. FASTER provides its source code on GitHub for users to deploy a high performance durable storage system.
This session covers how to work with the PySpark interface to develop Spark applications, from loading and ingesting data to applying transformations. It covers working with different data sources, applying transformations, and Python best practices for developing Spark apps. The demo covers integrating Apache Spark apps, in-memory processing capabilities, working with notebooks, and integrating analytics tools into Spark applications.
OVERVIEW OF FACEBOOK SCALABLE ARCHITECTURE (Rishikese MR)
The document provides an overview of Facebook's scalable architecture presented by Sharath Basil Kurian. It discusses how Facebook uses a variety of technologies like LAMP stack, PHP, Memcached, HipHop, Haystack, Scribe, Thrift, Hadoop and Hive to handle large amounts of user data and scale to support its massive user base. The architecture includes front-end components like PHP and BigPipe to dynamically render pages and back-end databases and caches like MySQL, Memcached and Haystack to efficiently store and retrieve user data.
The document discusses Apache Spark, an open source cluster computing framework for real-time data processing. It notes that Spark is up to 100 times faster than Hadoop for in-memory processing and 10 times faster on disk. The main feature of Spark is its in-memory cluster computing capability, which increases processing speeds. Spark runs on a driver-executor model and uses resilient distributed datasets and directed acyclic graphs to process data in parallel across a cluster.
Sizing Splunk SmartStore - Spend Less and Get More Out of Splunk (Paula Koziol)
Data is growing exponentially; however, IT budgets are not. Growth in internal use cases and additional data sources can put organizations under intense pressure to manage spiraling costs. The good news is that help is on the way. We will show how to size and configure Splunk SmartStore to yield significant cost savings, for both current and future data growth. In addition, learn how to configure the Splunk deployment for optimal search performance.
Originally presented at Splunk .conf19 on October 22, 2019
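A hedged sketch of the core SmartStore wiring in indexes.conf (bucket and volume names are hypothetical; storageType, path, and remotePath are standard SmartStore settings):
cat >> $SPLUNK_HOME/etc/system/local/indexes.conf <<'EOF'
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket/indexes

[main]
remotePath = volume:remote_store/$_index_name
EOF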
The document provides instructions for installing DSpace on Debian Squeeze. It describes creating a dspace user, installing prerequisite software like Java, Tomcat, Maven and PostgreSQL. It then guides setting up the DSpace database and downloading, configuring and building DSpace. Issues with downloading dependencies due to proxy settings are addressed. The instructions conclude with configuring the web container, creating an administrator and accessing DSpace.
Training on DSpace Institutional Repository
Organized by
BALID Institute of Information Management (BIIM)
DSpace Manual for BALID Trainee
Institutional Repository
1-2 May 2014
Venue: CIRDAP
• Installation of DSpace on Debian
• Configuration of DSpace
• Customization of DSpace
• Cron Jobs setup for production system
• MTA Setup for DSpace
• Some Important Commands of PostgreSQL
• DSpace Discovery Setup
Prepared By
Nur Ahammad
Junior Assistant Librarian
Independent University, Bangladesh
DSpace 1.8.2 Installation on CentOS 6.3 (Nur Ahammad)
Dspace-1.8.2 Installation on Centos-6.3
Nur Ahammad
Junior Assistant Librarian
Independent University, Bangladesh
I installed DSpace on CentOS for Dhaka University Library in October 2012. I prepared this manual at that time.
DSpace (ver. 5.1) software step-by-step installation for Windows.
DSpace is an open source repository software package typically used for creating open access repositories for scholarly and/or published digital content. While DSpace shares some feature overlap with content management systems and document management systems, the DSpace repository software serves a specific need as a digital archives system, focused on the long-term storage, access, and preservation of digital content, and is widely used for building institutional repositories (IRs).
This document provides instructions for setting up an Apache Hadoop cluster on Macintosh OSX. It describes installing and configuring Java, Hadoop, Hive, and MySQL on a "namenode" machine and multiple "datanode" machines. Key steps include installing software via Homebrew, configuring host files and SSH keys for passwordless login, creating configuration files for core Hadoop components and copying them to all datanodes, and installing scripts to help manage the cluster. The goal is to have a basic functioning Hadoop cluster on Mac OSX for testing and proof of concept purposes.
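For flavor, a minimal sketch of two of those steps on the namenode, assuming Homebrew is present and "datanode1" is a placeholder hostname:
brew install hadoop hive                   # install via Homebrew
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # key for passwordless login
ssh-copy-id user@datanode1                 # repeat for each datanode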
Training on Koha Integrated Library System (ILS)
Organized by BALID
3-7 September 2013
Installation of Koha on Debian
Post Installation of Koha
OPAC Customization
Some Important Commands of MySQL
Prepared By
Nur Ahammad
Junior Assistant Librarian
Independent University, Bangladesh
This document provides instructions on installing and configuring the LAMP stack on Linux. It discusses downloading and installing Linux, Apache, MySQL, and PHP. It explains how to partition disks for installation, set up virtual hosts, and configure Apache's configuration files and ports. The key steps are downloading Linux distributions, burning ISO images, partitioning disks, selecting packages during installation, configuring Apache's files, ports, and virtual hosts.
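As an illustrative sketch of the virtual-host step (hostname and paths are hypothetical; the directives themselves are standard Apache configuration):
cat > /etc/apache2/sites-available/example.conf <<'EOF'
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
EOF
a2ensite example.conf    # Debian layout; elsewhere, Include the file from httpd.conf
apachectl configtest && apachectl graceful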
This document contains sample questions and answers for the RedHat EX200 certification exam. It includes 24 multiple choice questions that cover topics like configuring the hostname, IP address, users and groups, permissions, filesystems, storage, services and more. For each question, it provides the question text and one or more possible correct answers to choose from. The goal of the exam is to test knowledge of administering Red Hat Enterprise Linux systems.
The document provides step-by-step instructions for installing a single-node Hadoop cluster on Ubuntu Linux using VMware. It details downloading and configuring required software like Java, SSH, and Hadoop. Configuration files are edited to set properties for core Hadoop functions and enable HDFS. Finally, sample data is copied to HDFS and a word count MapReduce job is run to test the installation.
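The final verification step might look like this sketch (user and paths are placeholders; the examples JAR ships with Hadoop):
hdfs dfs -mkdir -p /user/hduser/input
hdfs dfs -put sample.txt /user/hduser/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
  wordcount /user/hduser/input /user/hduser/output
hdfs dfs -cat /user/hduser/output/part-r-00000   # view the word counts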
Automatic systems installations and change management with FAI - Talk for Netw... (Henning Sprang)
How long does it take you to recover an arbitrary server, or to duplicate an arbitrary running configuration on a new system? Especially in the latter case, a full backup is no answer: it would contain the wrong IP address, hostname, and other settings, would therefore eventually break things, and is storage-exhaustive.
Get into FAI - Fully Automatic Installation.
FAI (http://www.informatik.uni-koeln.de/fai/) is a framework for completely automated installations - via LAN, CD, or USB stick - as well as configuration management for running systems. The concept "Plan your installation, and FAI installs your plan" supports, but also requires, building a well-planned and documented infrastructure. Configuration properties can be defined down to the smallest possible detail and then be arbitrarily combined - a great advantage in environments with many different system types which at the same time share one or more common bases and settings. FAI makes it possible to install and change many different systems at the same time.
In addition to all these things, with the grml-live software FAI can even be used to build live CDs/USB sticks. This talk will give an overview of the functionality and possibilities of FAI, including a comparison with Puppet, another renowned tool for similar (but not completely the same) tasks, which can even be integrated into FAI.
This document provides instructions for installing DSpace on Windows XP. It describes downloading and installing prerequisite software like Java, PostgreSQL, Apache Ant and Maven. It then explains how to compile and install DSpace by running Maven and Ant commands. Finally, it describes how to access DSpace after installing it and configuring Tomcat.
Advanced Level Training on Koha / TLS (ToT) (Ata Rehman)
Advanced Level Training on Koha / Total Library Solution - TLS - (ToT), December 4-8, 2017 – PASTIC, Islamabad
All training material provided during this training can be found at: https://drive.google.com/drive/folders/1hwWGHV1iHgcpjK_tw6-Xgf-ZVUPchIS_
The document provides instructions for installing and configuring various Linux server applications and services, including Jabberd, Sendmail, Qpopper, Squirrelmail, Samba, and others. It describes downloading and extracting source files, editing configuration files, and commands to compile, install, and start the servers. The instructions are provided in a step-by-step format intended for novice Linux system administrators.
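The compile-and-install steps generally follow the classic build-from-source pattern, sketched here with a placeholder package name:
tar -zxvf package-1.0.tar.gz
cd package-1.0
./configure --prefix=/usr/local/package   # choose an install location
make                                      # compile
make install                              # install (as root)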
Hadoop installation on Windows using VirtualBox, and also Hadoop installation on Ubuntu:
http://logicallearn2.blogspot.in/2018/01/hadoop-installation-on-ubuntu.html
Nagios Conference 2014 - Mike Weber - Expanding NRDS Capabilities on Linux Systems (Nagios)
Mike Weber's presentation on Expanding NRDS Capabilities on Linux Systems.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
ERP System Implementation on Kubernetes Cluster with Sticky Sessions (Chanaka Lasantha)
ERP System Implementation on Kubernetes Cluster with Sticky Sessions:
01. Security Features Enabled in Kubernetes Cluster.
02. SNMP, Syslog and audit logs enabled.
03. Enabled a no-login ERP service user.
04. Auto-scaling enabled for both ESB and JBoss pods (see the sketch after this list).
05. Reduced power consumption using scale-in during off-peak days.
06. NFS enabled as usual with the ERP service user.
07. External ingress (load balancer) enabled.
08. Cluster load balancer enabled by default.
09. SSH enabled via both putty.exe and Kubernetes management console.
10. Network Monitoring enabled on Kubernetes dashboard.
11. Isolated private and external network ranges to protect backend servers (pods).
12. OS of the pods is updated with the latest kernel version.
13. Core Linux OS will reduce security threats.
14. Lightweight OS with a small HDD footprint.
15. Reduced RAM usage.
16. AWS ready.
17. Possible to export into a public cloud environment.
18. L7 and L4 heavy load balancing enabled.
19. Snapshot version control enabled.
20. Many more, etc.
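To give point 04 a concrete flavor, a hedged sketch of CPU-based auto-scaling with kubectl (deployment name and thresholds are hypothetical):
kubectl autoscale deployment jboss --cpu-percent=80 --min=2 --max=10   # scale between 2 and 10 pods
kubectl get hpa                                                        # inspect the autoscaler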
The document discusses how to install, configure, and uninstall the Apache web server on Linux systems. It provides instructions for installing Apache using packages or compiling from source, editing configuration files to set up the server, and different methods for uninstalling Apache including using package managers or manually deleting files. The document also covers Apache configuration directives for the Prefork and Worker MPM modules and gives an overview of Apache filters and how to use them to manipulate HTTP request and response data.
These are the slides from a presentation I gave in 1999 at the Seattle Area System Administrators Guild monthly meeting. I haven't done this in a while, so I can't say how much of this is no longer valid, but it may prove useful to someone as a reference.
This document provides information about installing and configuring Linux, Apache web server, PostgreSQL database, and Apache Tomcat on a Linux system. It discusses installing Ubuntu using VirtualBox, creating users and groups, setting file permissions, important Linux files and directories. It also covers configuring Apache server and Tomcat, installing and configuring PostgreSQL, and some self-study questions about the Linux boot process, run levels, finding the kernel version and learning about NIS, NFS, and RPM package management.
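For example, the users, groups, and permissions topics come down to commands like these (names and paths are made up for illustration):
groupadd devs
useradd -m -g devs alice
chown -R alice:devs /srv/app   # give the app tree to alice and the devs group
chmod -R 750 /srv/app          # owner full, group read/execute, others nothing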
The document provides instructions on installing Linux, including collecting hardware information before installing, preparing disk partitions, installing from a CD-ROM, and basic package management tools for installing, upgrading, and removing software.
"Basics of Heterocyclic Compounds and Their Naming Rules"rupalinirmalbpharm
This video is about heterocyclic compounds, which are chemical compounds with rings that include atoms like nitrogen, oxygen, or sulfur along with carbon. It covers:
Introduction – What heterocyclic compounds are.
Prefix for heteroatom – How to name the different non-carbon atoms in the ring.
Suffix for heterocyclic compounds – How to finish the name depending on the ring size and type.
Nomenclature rules – Simple rules for naming these compounds the right way.
Common rings – Examples of popular heterocyclic compounds used in real life.
Understanding P–N Junction Semiconductors: A Beginner’s GuideGS Virdi
Dive into the fundamentals of P–N junctions, the heart of every diode and semiconductor device. In this concise presentation, Dr. G.S. Virdi (Former Chief Scientist, CSIR-CEERI Pilani) covers:
What Is a P–N Junction? Learn how P-type and N-type materials join to create a diode.
Depletion Region & Biasing: See how forward and reverse bias shape the voltage–current behavior.
V–I Characteristics: Understand the curve that defines diode operation.
Real-World Uses: Discover common applications in rectifiers, signal clipping, and more.
Ideal for electronics students, hobbyists, and engineers seeking a clear, practical introduction to P–N junction semiconductors.
Contact Lens:::: An Overview.pptx.: OptometryMushahidRaza8
A comprehensive guide for Optometry students: understanding in easy launguage of contact lens.
Don't forget to like,share and comments if you found it useful!.
Link your Lead Opportunities into Spreadsheet using odoo CRMCeline George
In Odoo 17 CRM, linking leads and opportunities to a spreadsheet can be done by exporting data or using Odoo’s built-in spreadsheet integration. To export, navigate to the CRM app, filter and select the relevant records, and then export the data in formats like CSV or XLSX, which can be opened in external spreadsheet tools such as Excel or Google Sheets.
CBSE - Grade 8 - Science - Chemistry - Metals and Non Metals - WorksheetSritoma Majumder
Introduction
All the materials around us are made up of elements. These elements can be broadly divided into two major groups:
Metals
Non-Metals
Each group has its own unique physical and chemical properties. Let's understand them one by one.
Physical Properties
1. Appearance
Metals: Shiny (lustrous). Example: gold, silver, copper.
Non-metals: Dull appearance (except iodine, which is shiny).
2. Hardness
Metals: Generally hard. Example: iron.
Non-metals: Usually soft (except diamond, a form of carbon, which is very hard).
3. State
Metals: Mostly solids at room temperature (except mercury, which is a liquid).
Non-metals: Can be solids, liquids, or gases. Example: oxygen (gas), bromine (liquid), sulphur (solid).
4. Malleability
Metals: Can be hammered into thin sheets (malleable).
Non-metals: Not malleable. They break when hammered (brittle).
5. Ductility
Metals: Can be drawn into wires (ductile).
Non-metals: Not ductile.
6. Conductivity
Metals: Good conductors of heat and electricity.
Non-metals: Poor conductors (except graphite, which is a good conductor).
7. Sonorous Nature
Metals: Produce a ringing sound when struck.
Non-metals: Do not produce sound.
Chemical Properties
1. Reaction with Oxygen
Metals react with oxygen to form metal oxides.
These metal oxides are usually basic.
Non-metals react with oxygen to form non-metallic oxides.
These oxides are usually acidic.
2. Reaction with Water
Metals:
Some react vigorously (e.g., sodium).
Some react slowly (e.g., iron).
Some do not react at all (e.g., gold, silver).
Non-metals: Generally do not react with water.
3. Reaction with Acids
Metals react with acids to produce salt and hydrogen gas.
Non-metals: Do not react with acids.
4. Reaction with Bases
Some non-metals react with bases to form salts, but this is rare.
Metals generally do not react with bases directly (except amphoteric metals like aluminum and zinc).
Displacement Reaction
More reactive metals can displace less reactive metals from their salt solutions.
Uses of Metals
Iron: Making machines, tools, and buildings.
Aluminum: Used in aircraft, utensils.
Copper: Electrical wires.
Gold and Silver: Jewelry.
Zinc: Coating iron to prevent rusting (galvanization).
Uses of Non-Metals
Oxygen: Breathing.
Nitrogen: Fertilizers.
Chlorine: Water purification.
Carbon: Fuel (coal), steel-making (coke).
Iodine: Medicines.
Alloys
An alloy is a mixture of metals or a metal with a non-metal.
Alloys have improved properties like strength, resistance to rusting.
K12 Tableau Tuesday - Algebra Equity and Access in Atlanta Public Schoolsdogden2
Algebra 1 is often described as a “gateway” class, a pivotal moment that can shape the rest of a student’s K–12 education. Early access is key: successfully completing Algebra 1 in middle school allows students to complete advanced math and science coursework in high school, which research shows lead to higher wages and lower rates of unemployment in adulthood.
Learn how The Atlanta Public Schools is using their data to create a more equitable enrollment in middle school Algebra classes.
The *nervous system of insects* is a complex network of nerve cells (neurons) and supporting cells that process and transmit information. Here's an overview:
Structure
1. *Brain*: The insect brain is a complex structure that processes sensory information, controls behavior, and integrates information.
2. *Ventral nerve cord*: A chain of ganglia (nerve clusters) that runs along the insect's body, controlling movement and sensory processing.
3. *Peripheral nervous system*: Nerves that connect the central nervous system to sensory organs and muscles.
Functions
1. *Sensory processing*: Insects can detect and respond to various stimuli, such as light, sound, touch, taste, and smell.
2. *Motor control*: The nervous system controls movement, including walking, flying, and feeding.
3. *Behavioral responThe *nervous system of insects* is a complex network of nerve cells (neurons) and supporting cells that process and transmit information. Here's an overview:
Structure
1. *Brain*: The insect brain is a complex structure that processes sensory information, controls behavior, and integrates information.
2. *Ventral nerve cord*: A chain of ganglia (nerve clusters) that runs along the insect's body, controlling movement and sensory processing.
3. *Peripheral nervous system*: Nerves that connect the central nervous system to sensory organs and muscles.
Functions
1. *Sensory processing*: Insects can detect and respond to various stimuli, such as light, sound, touch, taste, and smell.
2. *Motor control*: The nervous system controls movement, including walking, flying, and feeding.
3. *Behavioral responses*: Insects can exhibit complex behaviors, such as mating, foraging, and social interactions.
Characteristics
1. *Decentralized*: Insect nervous systems have some autonomy in different body parts.
2. *Specialized*: Different parts of the nervous system are specialized for specific functions.
3. *Efficient*: Insect nervous systems are highly efficient, allowing for rapid processing and response to stimuli.
The insect nervous system is a remarkable example of evolutionary adaptation, enabling insects to thrive in diverse environments.
The insect nervous system is a remarkable example of evolutionary adaptation, enabling insects to thrive
APM event hosted by the Midlands Network on 30 April 2025.
Speaker: Sacha Hind, Senior Programme Manager, Network Rail
With fierce competition in today’s job market, candidates need a lot more than a good CV and interview skills to stand out from the crowd.
Based on her own experience of progressing to a senior project role and leading a team of 35 project professionals, Sacha shared not just how to land that dream role, but how to be successful in it and most importantly, how to enjoy it!
Sacha included her top tips for aspiring leaders – the things you really need to know but people rarely tell you!
We also celebrated our Midlands Regional Network Awards 2025, and presenting the award for Midlands Student of the Year 2025.
This session provided the opportunity for personal reflection on areas attendees are currently focussing on in order to be successful versus what really makes a difference.
Sacha answered some common questions about what it takes to thrive at a senior level in a fast-paced project environment: Do I need a degree? How do I balance work with family and life outside of work? How do I get leadership experience before I become a line manager?
The session was full of practical takeaways and the audience also had the opportunity to get their questions answered on the evening with a live Q&A session.
Attendees hopefully came away feeling more confident, motivated and empowered to progress their careers
A measles outbreak originating in West Texas has been linked to confirmed cases in New Mexico, with additional cases reported in Oklahoma and Kansas. The current case count is 817 from Texas, New Mexico, Oklahoma, and Kansas. 97 individuals have required hospitalization, and 3 deaths, 2 children in Texas and one adult in New Mexico. These fatalities mark the first measles-related deaths in the United States since 2015 and the first pediatric measles death since 2003.
The YSPH Virtual Medical Operations Center Briefs (VMOC) were created as a service-learning project by faculty and graduate students at the Yale School of Public Health in response to the 2010 Haiti Earthquake. Each year, the VMOC Briefs are produced by students enrolled in Environmental Health Science Course 581 - Public Health Emergencies: Disaster Planning and Response. These briefs compile diverse information sources – including status reports, maps, news articles, and web content– into a single, easily digestible document that can be widely shared and used interactively. Key features of this report include:
- Comprehensive Overview: Provides situation updates, maps, relevant news, and web resources.
- Accessibility: Designed for easy reading, wide distribution, and interactive use.
- Collaboration: The “unlocked" format enables other responders to share, copy, and adapt seamlessly. The students learn by doing, quickly discovering how and where to find critical information and presenting it in an easily understood manner.
CURRENT CASE COUNT: 817 (As of 05/3/2025)
• Texas: 688 (+20)(62% of these cases are in Gaines County).
• New Mexico: 67 (+1 )(92.4% of the cases are from Eddy County)
• Oklahoma: 16 (+1)
• Kansas: 46 (32% of the cases are from Gray County)
HOSPITALIZATIONS: 97 (+2)
• Texas: 89 (+2) - This is 13.02% of all TX cases.
• New Mexico: 7 - This is 10.6% of all NM cases.
• Kansas: 1 - This is 2.7% of all KS cases.
DEATHS: 3
• Texas: 2 – This is 0.31% of all cases
• New Mexico: 1 – This is 1.54% of all cases
US NATIONAL CASE COUNT: 967 (Confirmed and suspected):
INTERNATIONAL SPREAD (As of 4/2/2025)
• Mexico – 865 (+58)
‒Chihuahua, Mexico: 844 (+58) cases, 3 hospitalizations, 1 fatality
• Canada: 1531 (+270) (This reflects Ontario's Outbreak, which began 11/24)
‒Ontario, Canada – 1243 (+223) cases, 84 hospitalizations.
• Europe: 6,814
World war-1(Causes & impacts at a glance) PPT by Simanchala Sarab(BABed,sem-4...larencebapu132
This is short and accurate description of World war-1 (1914-18)
It can give you the perfect factual conceptual clarity on the great war
Regards Simanchala Sarab
Student of BABed(ITEP, Secondary stage)in History at Guru Nanak Dev University Amritsar Punjab 🙏🙏
Odoo Inventory Rules and Routes v17 - Odoo SlidesCeline George
Odoo's inventory management system is highly flexible and powerful, allowing businesses to efficiently manage their stock operations through the use of Rules and Routes.
Title: A Quick and Illustrated Guide to APA Style Referencing (7th Edition)
This visual and beginner-friendly guide simplifies the APA referencing style (7th edition) for academic writing. Designed especially for commerce students and research beginners, it includes:
✅ Real examples from original research papers
✅ Color-coded diagrams for clarity
✅ Key rules for in-text citation and reference list formatting
✅ Free citation tools like Mendeley & Zotero explained
Whether you're writing a college assignment, dissertation, or academic article, this guide will help you cite your sources correctly, confidently, and consistent.
Created by: Prof. Ishika Ghosh,
Faculty.
📩 For queries or feedback: [email protected]
*Metamorphosis* is a biological process where an animal undergoes a dramatic transformation from a juvenile or larval stage to a adult stage, often involving significant changes in form and structure. This process is commonly seen in insects, amphibians, and some other animals.
How to Manage Opening & Closing Controls in Odoo 17 POSCeline George
In Odoo 17 Point of Sale, the opening and closing controls are key for cash management. At the start of a shift, cashiers log in and enter the starting cash amount, marking the beginning of financial tracking. Throughout the shift, every transaction is recorded, creating an audit trail.
2. Required Software
Operating system:
- Linux: Red Hat/Fedora, SUSE, Mandrake, Debian, etc.
- Unix: Solaris (Sun), HP-UX (Hewlett-Packard), AIX (IBM), Mac OS X
Mail server (Sendmail/Postfix)
RDBMS (PostgreSQL/Oracle)
DSpace
Java compiler (JDK)
Java servlet container
Ant: a Java build tool, similar to make in the world of C; compiles the Java programs of the DSpace source code and generates WAR files
3. Role of RDBMS
The database backend (PostgreSQL/Oracle) of DSpace stores information on:
- Communities
- Collections
- Members and passwords
- E-groups, etc.
4. Step 1: Linux Installation
We strongly advise you to load Linux fully unless you are a Linux guru. Make sure a mail server is installed.
Copy all the files provided on the CD-ROM (tar.gz files) into the /dspace directory, or download the following (or latest) files from the Internet:
- jdk1.5.0_02.tar.gz (Java compiler)
- apache-ant-1.7.0-bin.tar.gz (Ant)
- postgresql-8.2.7.tar.gz (RDBMS)
- apache-tomcat-5.5.25.tar.gz (servlet container)
- postgresql-8.3-603.jdbc2.jar (JDBC driver for PostgreSQL)
- dspace-source-1.4.2.tar.gz (DSpace software)
5. Step 2: Installation of Java
Install Java 1.4 or later. You need to log in as the Linux root user to install. Use the commands below:
#cd /dspace
#tar -zxvf jdk1.5.0_02.tar.gz   [uncompress the file]
#rm /usr/bin/java   [remove the original java binary, if any]
#cd /usr/bin
#ln -s /dspace/jdk1.5.0_02/bin/java java   [create a symbolic link to the newly installed java]
6. Step 2: Installation of Java (contd.)
Define the JAVA_HOME path with the commands:
#JAVA_HOME=/dspace/jdk1.5.0_02   [set the variable to point to the Java directory]
#export JAVA_HOME
To set the environment variable JAVA_HOME permanently (so it gets set at boot time), do the following:
#vi /etc/profile   (open the /etc/profile file)
Add the two lines below at the end of the file:
JAVA_HOME=/dspace/jdk1.5.0_02
export JAVA_HOME
Save the file (press ESC :wq); this will set the JAVA_HOME variable when the system boots.
7. Step 3: Configuring the Mail Server
You may use any of the following mail servers:
- Sendmail
- Postfix
- Exim
Note: Sendmail is explained here.
8. Step 3: Sendmail Configuration: Simple Approach
Case I: If your organization has a mail server
Open /etc/mail/sendmail.mc and find the line containing:
dnl define(`SMART_HOST', `smtp.your.provider')dnl
Remove the dnl and enter your mail server name, e.g.:
define(`SMART_HOST', `nsdl.niscair.res.in')dnl
9. Step 3: Sendmail Configuration (contd.)
Case II: If you want to use the same system as the mail server
Comment out the following line:
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
e.g.:
dnl # DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
10. Step 3: Sendmail Configuration (contd.)
Save the file (press ESC :wq), then run:
#m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
#service sendmail restart
OR
#/etc/init.d/sendmail restart
11. Step 4: Apache Ant Installation
Check whether Apache Ant is already installed using the command:
#which ant
You need to log in as the Linux root user. If Ant is not installed, type the following commands to install it:
#cd /dspace
#tar -zxvf apache-ant-1.7.0-bin.tar.gz   [extract files]
12. Step 4: Apache Ant Installation (contd.)
Define a path to the Apache Ant binary with the commands:
#PATH=$PATH:/dspace/apache-ant-1.7.0/bin
#export PATH
To add the Apache Ant path to the PATH variable permanently, open the file /etc/profile and add the two lines below towards the end of the file:
#vi /etc/profile
PATH=$PATH:/dspace/apache-ant-1.7.0/bin
export PATH
Save the file (press ESC :wq).
13. Step 5: PostgreSQL Installation
You need to become the Linux root user to install PostgreSQL. Use the following commands:
#cd /dspace
#tar -zxvf postgresql-8.2.7.tar.gz   [extract files]
#cd /dspace/postgresql-8.2.7
#./configure   [it will install PostgreSQL in the /usr/local/pgsql directory]
#gmake
#gmake install
#useradd postgres   [create the postgres user]
#mkdir /usr/local/pgsql/data
#chown -R postgres /usr/local/pgsql/data   [change the owner of the data directory to postgres]
#su - postgres
14. Step 5: PostgreSQL Installation (contd.)
Start PostgreSQL by doing the following:
$/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
$/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data start
Create a dspace database, owned by dspace, a PostgreSQL user, by doing the following:
$/usr/local/pgsql/bin/createuser -U postgres -d -A -P dspace   [enter a password for the DSpace database]
$/usr/local/pgsql/bin/createdb -U dspace -E UNICODE dspace
$vi /usr/local/pgsql/data/postgresql.conf
Uncomment the line starting: listen_addresses = 'localhost' (i.e., delete the # at the beginning of the line)
$vi /usr/local/pgsql/data/pg_hba.conf
Add the following line in the section "# IPv4-style local connections":
host dspace dspace 127.0.0.1 255.255.255.255 md5
Log out from the postgres user:
$exit
15. Step 6: Installation of Apache Tomcat
You have to become the root user and type the following commands:
#cd /dspace
#tar -zxvf apache-tomcat-5.5.25.tar.gz   [extract files]
Set the environment variable JAVA_OPTS by doing the following:
#JAVA_OPTS="-Xmx512M -Xms64M -Dfile.encoding=UTF-8"
#export JAVA_OPTS
To make it permanent, open /etc/profile and add the two lines below towards the end of the file:
#vi /etc/profile
JAVA_OPTS="-Xmx512M -Xms64M -Dfile.encoding=UTF-8"
export JAVA_OPTS
Save the file (press ESC :wq).
16. Step 6: Installation of Apache Tomcat (contd.)
#vi /dspace/apache-tomcat-5.5.25/conf/server.xml
Locate the following section:
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
  enableLookups="false" redirectPort="8443" acceptCount="100"
  connectionTimeout="20000" disableUploadTimeout="true" />
and add the attribute URIEncoding="UTF-8" to this section, like:
<!-- Define a non-SSL HTTP/1.1 Connector on port 80 -->
<Connector port="80" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
  enableLookups="false" redirectPort="8443" acceptCount="100"
  connectionTimeout="20000" disableUploadTimeout="true" URIEncoding="UTF-8" />
Save the file (press ESC :wq).
17. Step 7: Installation of DSpace
You have to log in as the Linux root user. Create the DSpace Linux user by using the commands:
#groupadd dspace   [create group]
#useradd dspace -g dspace   [create dspace user]
#chown -R dspace.dspace /dspace/apache-tomcat-5.5.25   [change the owner and group of the Tomcat directory to dspace, so it runs as the dspace user]
#cd /dspace
#tar -zxvf dspace-source-1.4.2.tar.gz
This creates a directory named dspace-1.4.2-source.
Copy the JDBC driver for PostgreSQL (postgresql-8.3-603.jdbc2.jar) to the /dspace/dspace-1.4.2-source/lib directory:
#cp /dspace/postgresql-8.3-603.jdbc2.jar /dspace/dspace-1.4.2-source/lib
#chown -R dspace.dspace /dspace/dspace-1.4.2-source   [change the owner and group of the dspace directory to dspace]
18. Step 7: Installation of DSpace (contd.)
#su -l dspace
$cd /dspace/dspace-1.4.2-source
Open the file /dspace/dspace-1.4.2-source/config/dspace.cfg and set the following properties:
$vi /dspace/dspace-1.4.2-source/config/dspace.cfg
dspace.url = [like http://192.168.3.203/dspace]
dspace.hostname = [hostname or IP address of the server]
dspace.name = [DSpace name, like the name of your institution, e.g. NISCAIR Digital Library]
db.password = [the password you entered in the last step of the PostgreSQL installation]
mail.server = [hostname or IP address of the mail server, e.g. mail.niscair.res.in]
mail.from.address = [email address]
feedback.recipient = [email address]
mail.admin = [email address of admin]
alert.recipient = [email address (not essential but very useful!)]
Save the file.
$cd /dspace/dspace-1.4.2-source
19. Step 7: Installation of DSpace (contd.)
Compile and install DSpace by doing the following:
$/dspace/apache-ant-1.7.0/bin/ant fresh_install
$cp /dspace/dspace-1.4.2-source/build/*.war /dspace/apache-tomcat-5.5.25/webapps/dspace/
Define the CLASSPATH for the DSpace classes by doing the following:
$vi /dspace/dspace-1.4.2-source/bin/dsrun
Append the following lines at the end of the file:
CLASSPATH=$CLASSPATH:/dspace/apache-tomcat-5.5.25/webapps/dspace/WEB-INF/classes
FULLPATH=$CLASSPATH:$JARS:$DSPACEDIR/config
Create an initial administrator account with the command:
$/dspace/dspace-1.4.2-source/bin/create-administrator
You will need to provide some information, like the admin user name, email ID, and so on.
20. Step 7: Installation of DSpace (contd.)
Start Tomcat with the command:
$/dspace/apache-tomcat-5.5.25/bin/startup.sh
Point your browser to the URL:
http://HOSTNAME_OR_IP_ADDRESS_OF_SERVER/dspace
Access the admin UI by pointing your browser to the URL:
http://HOSTNAME_OR_IP_ADDRESS_OF_SERVER/dspace/dspace-admin
e.g. http://192.168.3.203/dspace
21. Installation of DSpace: Summary
DSpace is based on open source technology. The installation process is somewhat complex for new users. The following components must work properly:
- PostgreSQL server
- JDBC driver for the PostgreSQL server
- Apache Tomcat
Initially there may be a few errors related to the above components if they are not properly installed.
22. Cron Jobs
To perform certain tasks periodically, we may use cron jobs, set up by typing the following command:
#crontab -e
# Send out subscription e-mails at 01:00 every day
0 1 * * * /dspace/bin/sub-daily
# Run the media filter at 02:00 every day
0 2 * * * /dspace/bin/filter-media
# Generate the full-text index at 02:15 every day
15 2 * * * /dspace/bin/index-all
# Clean up the database nightly at 02:40
40 2 * * * vacuumdb --analyze dspace > /dev/null 2>&1
The five fields of a crontab entry are:
- Minute: 0-59
- Hour: 0-23 (0 = midnight)
- Day: 1-31
- Month: 1-12
- Weekday: 0-6 (0 = Sunday)
23. Starting Apache Tomcat on Boot
To make your repository start at boot time, add the following to /etc/rc.d/rc.local:
su -l dspace -c '/dspace/apache-tomcat-5.5.25/bin/startup.sh'
24. Troubleshooting
Check your environment variables by giving the following commands:
echo $PATH
echo $JAVA_HOME
See whether Java's bin directory is in your PATH and whether JAVA_HOME is pointing to the Java directory; if not, change your /dspace/.bash_profile.
25. Troubleshooting: During fresh_install
Mostly you get database-related errors. The cause could be:
- You did not copy the JDBC driver into the dspace-source/lib directory
- Or the changes to the PostgreSQL configuration files (postgresql.conf, pg_hba.conf) were not made at all, or were made improperly
26. Troubleshooting: Once You Launch DSpace
If you do not see DSpace on the screen:
- Tomcat was not launched, or
- The port (8080) was already in use (you started Tomcat a second time)
Solution:
- Kill Tomcat if you started it a second time (using ps -a | grep java, or killall java)
- Change to another port in tomcat/conf/server.xml
- Check $TOMCAT_HOME/logs/catalina.out for specific problem identification
27. Troubleshooting: Internal System Error
This is the most common error message. It is too generic and not specific, and the reasons could be many. Check the /dspace/log/dspace.log file, which may pinpoint the specific problem.
28. Troubleshooting: Fails to Send Mail
- The mail configuration is wrong
- You did not make the mail server entry in the /dspace/config/dspace.cfg file
- DNS problem: you do not have an FQDN (Fully Qualified Domain Name) for your system
The hostname should be of the form hostname.domainname, e.g. localhost.localdomain (not just localhost) or nsdl.niscair.res.in (not just nsdl).