Mainframe Interview Questions

To skip the first 100 records and then copy the next 100 records using JCL, you can use a utility such as DFSORT. Below is an example using DFSORT's SKIPREC and STOPAFT options:

//COPY100  EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=YOUR.INPUT.DATASET,DISP=SHR
//SORTOUT  DD DSN=YOUR.OUTPUT.DATASET,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(1,1),RLSE),
//            DCB=(LRECL=80,BLKSIZE=0,RECFM=FB)
//SYSIN    DD *
  OPTION COPY,SKIPREC=100,STOPAFT=100
/*

SKIPREC=100 tells DFSORT to discard the first 100 input records, and STOPAFT=100 stops processing after the next 100 records have been copied, so records 101 through 200 land in the output dataset. An OUTFIL statement with STARTREC=101,ENDREC=200 achieves the same selection.
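The skip-then-copy logic is easy to verify off the mainframe; here is a minimal Python sketch of what SKIPREC=100,STOPAFT=100 does (record names are invented for illustration):

```python
from itertools import islice

def copy_window(records, skip=100, take=100):
    """Mimic DFSORT SKIPREC/STOPAFT: drop `skip` records, keep the next `take`."""
    return list(islice(records, skip, skip + take))

# With 300 input records, records 101-200 are copied.
recs = [f"REC{i:05d}" for i in range(1, 301)]
out = copy_window(recs)
```

With 300 input records, `out` holds exactly records 101 through 200.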

*****************************************************************************************

To run only the 3rd step in a JCL job that contains multiple steps, you can combine the RESTART parameter on the JOB card (to skip the earlier steps) with COND parameters on the later steps (to bypass them). If you only need to skip the earlier steps, the RESTART parameter on the JOB card alone is enough.

Here is an example of how to modify the JCL to run only the 3rd step:

//JOBNAME JOB (ACCT),'NAME',RESTART=STEP03
//STEP01  EXEC PGM=PROG1
//...
//STEP03  EXEC PGM=PROG3
//...
//STEP10  EXEC PGM=PROG10

In this example:

• The RESTART=STEP03 parameter on the JOB card tells the system to begin execution starting from the step
named STEP03.

• There's no specific "utility" used for this purpose; it's a standard feature of JCL.

If you want to ensure that only the 3rd step runs and the remaining steps are bypassed, add a COND parameter that is always true, such as COND=(0,LE), to each subsequent step:

//JOBNAME JOB (ACCT),'NAME',RESTART=STEP03
//STEP01  EXEC PGM=PROG1
//...
//STEP03  EXEC PGM=PROG3
//STEP04  EXEC PGM=PROG4,COND=(0,LE)
//...
//STEP10  EXEC PGM=PROG10,COND=(0,LE)

A COND test bypasses the step when the test is true. COND=(0,LE) means "bypass this step if 0 is less than or equal to the return code of any preceding step"; since return codes are never negative, the test is always true and the step is always bypassed. With RESTART=STEP03 on the JOB card and COND=(0,LE) on STEP04 through STEP10, only STEP03 executes. This is a straightforward way to control job processing in JCL.
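The JCL COND rule is easy to get backwards: a step is bypassed when the stated test against the earlier step's return code is true. A small Python model of this rule (operator names follow JCL):

```python
import operator

# JCL COND tests: the step is BYPASSED when "code OP step-rc" is true.
OPS = {"GT": operator.gt, "GE": operator.ge, "EQ": operator.eq,
       "NE": operator.ne, "LT": operator.lt, "LE": operator.le}

def bypassed(code, op, step_rc):
    """True if a step coded with COND=(code,op,refstep) is bypassed,
    given the referenced step ended with return code step_rc."""
    return OPS[op](code, step_rc)

# COND=(0,LE): 0 <= rc holds for every return code, so the step never
# runs -- the usual trick for skipping trailing steps.
always_skipped = bypassed(0, "LE", 0) and bypassed(0, "LE", 8)

# COND=(0,NE): bypassed only when rc != 0, i.e. the step runs only if
# the referenced step ended with return code 0.
runs_on_success = not bypassed(0, "NE", 0)
```

The model makes the asymmetry explicit: COND names the condition for *skipping* the step, not for running it.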

*****************************************************************************************

To run a specific step in a JCL using IEBEDIT, you need to create a utility job that specifies which steps to execute from
the original job stream. IEBEDIT allows you to selectively execute one or more steps from a JCL job stream, making it
useful for testing or debugging specific parts of a larger job without running the entire sequence.

Here's an example of how to set up a JCL to use IEBEDIT to run only the third step of an existing job:

Sample JCL with IEBEDIT to Run Only the 3rd Step

//IEBEDITJ JOB (ACCT),'NAME'
//STEP1    EXEC PGM=IEBEDIT
//SYSUT1   DD DSN=YOUR.ORIGINAL.JOB.DATASET,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  EDIT TYPE=INCLUDE,STEPNAME=STEP03
/*

Explanation:

• Input dataset: Replace your.original.job.dataset with the dataset name where the original JCL job is stored.

• STEP1: This is the step where you run the IEBEDIT program.

• SYSUT1: This DD statement points to the dataset containing the original JCL that you want to edit.

• SYSUT2: This DD statement defines the output destination, which in this case is set to the system output
(SYSOUT=*). This output will be the JCL for the included steps ready to be submitted.

• SYSPRINT: This is used for diagnostic messages from IEBEDIT.

• SYSIN: This contains control statements for IEBEDIT. The EDIT TYPE=INCLUDE, STEPNAME=(STEP03)
statement tells IEBEDIT to include only STEP03 from the original job.

When this job is executed, IEBEDIT reads the JCL from SYSUT1, includes only the specified step (STEP03) together with the original JOB statement, and writes the resulting JCL to SYSUT2. You can then review the output to verify that only the desired step is included. To submit the extracted JCL automatically instead of reviewing it, SYSUT2 can be pointed at the internal reader (for example, SYSOUT=(A,INTRDR)).

Make sure that the step names in your control statements match exactly with the step names in the original JCL.
Adjust the STEPNAME parameter to target different steps as needed.

*****************************************************************************************

To retrieve the primary key details of a table in an IBM DB2 database, you can query the system catalog, which stores metadata about database objects. Note that on DB2 for z/OS the catalog lives in SYSIBM tables (for example SYSIBM.SYSINDEXES and SYSIBM.SYSKEYS), while DB2 for Linux, UNIX and Windows exposes the same metadata through SYSCAT views. The query below uses the SYSCAT views SYSCAT.INDEXCOLUSE and SYSCAT.INDEXES.

Here is an SQL query that joins these catalog views to get the primary key details for a specific table:

SELECT
    i.TABSCHEMA,
    i.TABNAME,
    k.COLNAME,
    i.INDNAME AS PK_NAME
FROM
    SYSCAT.INDEXCOLUSE k
JOIN
    SYSCAT.INDEXES i
    ON k.INDSCHEMA = i.INDSCHEMA AND k.INDNAME = i.INDNAME
WHERE
    i.TABSCHEMA = 'YOUR_SCHEMA_NAME'   -- Specify your schema name here
    AND i.TABNAME = 'YOUR_TABLE_NAME'  -- Specify your table name here
    AND i.UNIQUERULE = 'P'
ORDER BY
    k.COLSEQ;

Explanation of the Query:

• SYSCAT.INDEXCOLUSE: This catalog view lists the columns that participate in each index, together with each column's position within the key (COLSEQ).

• SYSCAT.INDEXES: This view contains information about the indexes defined in the database, including whether an index enforces a primary key constraint (UNIQUERULE = 'P').

Parameters:

• YOUR_SCHEMA_NAME: Replace this with the schema name of your table.

• YOUR_TABLE_NAME: Replace this with the name of the table for which you want to find the primary key
details.

Columns in the result:

• TABSCHEMA: The schema of the table.

• TABNAME: The name of the table.

• COLNAME: The name of the column included in the primary key.

• PK_NAME: The name of the primary key.

This query returns the schema, table name, primary key column names (in key order), and the name of the index that enforces the primary key for the specified table. Make sure you have the necessary permissions to access the catalog views in your DB2 database.
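The same idea, asking the engine's own metadata rather than the data, carries across databases. As a runnable illustration, SQLite exposes primary key information through PRAGMA statements (the table and column names here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ORDERS (ORDER_ID INTEGER, LINE_NO INTEGER, QTY INTEGER, "
    "PRIMARY KEY (ORDER_ID, LINE_NO))")

# PRAGMA table_info returns (cid, name, type, notnull, dflt_value, pk);
# pk is the 1-based position of the column within the primary key,
# or 0 if the column is not part of the key.
pk_cols = [row[1] for row in conn.execute("PRAGMA table_info(ORDERS)")
           if row[5] > 0]
```

Here `pk_cols` comes back as the two key columns in declaration order, mirroring what COLSEQ gives you in the DB2 catalog query.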

*****************************************************************************************

In COBOL, defining a two-dimensional array involves using a multi-level data structure where you define an array of
arrays. This is accomplished by declaring a group item that includes multiple occurrences of another group item, each
having multiple occurrences of a data item. Here’s how you can define a two-dimensional array in COBOL:

Example: Defining a 5x3 Array

Let's define a two-dimensional array with 5 rows and 3 columns.

01 TWO-DIMENSIONAL-ARRAY.
   05 ROWS OCCURS 5 TIMES.
      10 COLUMNS OCCURS 3 TIMES.
         15 CELL PIC X(10).

Explanation

• 01 TWO-DIMENSIONAL-ARRAY: This is the level number and name of the array.

• 05 ROWS OCCURS 5 TIMES: This defines the first dimension with 5 occurrences, representing the rows.
• 10 COLUMNS OCCURS 3 TIMES: Nested within ROWS, this defines the second dimension with 3
occurrences, representing the columns.

• 15 CELL PIC X(10): This is the actual data item stored in each cell of the array. Here, each cell can hold a
string of up to 10 characters (PIC X(10)).

Accessing Elements

To access or manipulate the elements of this two-dimensional array, refer to them by their subscripts, row first and then column. For example, to move a value to the second row's first column, you would write:

MOVE 'Example' TO CELL (2, 1)

Notes

• Ensure that the indices you use are within the bounds of the array dimensions you have defined.

• COBOL arrays are 1-based by default, meaning indexing starts from 1.

This structure is typical in COBOL for representing two-dimensional arrays and can be adjusted for any size or type by
changing the occurrence values and the picture clause of the CELL.
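For readers more used to other languages, the same 5x3 structure and its 1-based (row, column) addressing can be mirrored in Python (purely illustrative):

```python
ROWS, COLS = 5, 3

# COBOL's nested OCCURS is equivalent to a list of rows, each a list of cells.
table = [["" for _ in range(COLS)] for _ in range(ROWS)]

def move(value, row, col):
    """MOVE value TO CELL (row, col) -- COBOL subscripts start at 1."""
    table[row - 1][col - 1] = value[:10]   # PIC X(10): at most 10 characters

move("Example", 2, 1)
```

After the call, row 2 / column 1 holds "Example" while every other cell is still empty, matching the MOVE statement above.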

*****************************************************************************************

Integrating COBOL applications with modern APIs, especially those involving message queue (MQ) systems like IBM MQ, typically involves several components and layers of interaction. COBOL generally does not have built-in support for making direct API calls to web services or handling HTTP requests, so indirect methods are used, such as leveraging middleware, additional software layers, or external services that can interact with both COBOL applications and web APIs.

Here’s a high-level overview of how a COBOL program can interact with an API via a message queue system:

1. Setup and Configuration of MQ

Firstly, set up and configure your MQ system (e.g., IBM MQ). This involves:

• Defining a queue manager.

• Creating queues that your COBOL application will use to send and receive messages.

• Ensuring network communication is secure and reliable between the MQ server and the COBOL application.

2. COBOL Application Interaction with MQ

In the COBOL application, you will typically use MQ API calls to interact with the message queue. The steps generally
include:

• Connecting to the Queue Manager: Establish a connection to the MQ queue manager.

• Opening the Queue: Open the queue that you will be using to send or receive messages.

• Writing Messages: Write messages to the queue, which contain the data or requests meant for the API.

• Reading Messages: Read messages from the queue, which might include API responses or data from other
sources.

Here is a simplified example of COBOL code for sending a message to an MQ:

EXEC CICS LINK PROGRAM('MQPUT') COMMAREA(MQ-DATA) LENGTH(DATA-LENGTH) END-EXEC.

In this example, 'MQPUT' would be a program handling the MQ API calls to put a message onto a queue. MQ-DATA is
the data structure containing the message to be sent.

3. Middleware or Service Handling API Interactions

The middleware or a separate service layer is responsible for:

• Receiving messages from the MQ: The service listens to the queue for incoming messages.

• Making API Calls: It extracts necessary information from the messages and makes corresponding API calls
to the web service.
• Sending Responses Back to MQ: After receiving responses from the API, the service sends these back to
another MQ queue, from which the COBOL application can read.
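The middleware loop described above can be sketched in Python, with `queue.Queue` standing in for the MQ request and reply queues and a stub in place of the real web-service call (all names and the message layout are invented for illustration):

```python
import queue

request_q, reply_q = queue.Queue(), queue.Queue()

def call_api(payload):
    # Stand-in for the real HTTP/web-service call the middleware would make.
    return {"status": "OK", "echo": payload}

def middleware_once():
    """Take one message off the request queue, call the API,
    and put the response on the reply queue for the COBOL side to read."""
    msg = request_q.get()
    reply_q.put(call_api(msg))

# The "COBOL side" writes a request message; the middleware bridges it.
request_q.put({"account": "12345", "op": "BALANCE"})
middleware_once()
response = reply_q.get()
```

The essential shape is the same in production: the COBOL program only ever sees queues, and the middleware owns the HTTP conversation.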

4. Processing API Responses in COBOL

Once the API response is back in the MQ and read by the COBOL application, further processing can happen based on
the API’s response.

Additional Considerations

• Security: Ensure that all data transmitted between your COBOL application, MQ, and the API is secure. Use
encryption and secure channels.

• Error Handling: Robust error handling must be added to manage connectivity issues, data format errors, or
API failures.

• Performance: Monitor the performance impacts on your COBOL application and optimize the flow as
necessary.

Conclusion

Integrating COBOL with modern APIs through MQ involves a combination of traditional COBOL programming,
understanding MQ systems, and often a middleware layer that can translate between MQ messages and web API calls.
This setup allows COBOL applications, which are often running on mainframe systems, to communicate effectively
with modern, web-based systems.

*****************************************************************************************

Below is a simplified example of a COBOL program that interacts with another system or program via IBM MQ API
calls. This example assumes that the COBOL environment is set up to use IBM MQ and that you have the necessary MQ
libraries and copybooks available. The program will demonstrate how to put a message onto an MQ queue, which is a
common operation for interacting with other systems via message queues.

Prerequisites:

1. IBM MQ installed and configured.

2. Access to MQ-related COBOL copybooks (MQI).

3. Queue manager and queue names correctly configured.

Sample COBOL Program:

IDENTIFICATION DIVISION.
PROGRAM-ID. MQSENDER.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
* MQI named constants (MQCC-..., MQOO-..., MQCO-...)
 01 MQ-CONSTANTS.
 COPY CMQV.
* Object descriptor, message descriptor and put-message options
 01 OBJECT-DESCRIPTOR.
 COPY CMQODV.
 01 MESSAGE-DESCRIPTOR.
 COPY CMQMDV.
 01 PUT-MESSAGE-OPTIONS.
 COPY CMQPMOV.
 01 QM-NAME        PIC X(48) VALUE 'QMGR'.
 01 MQ-HCONN       PIC S9(9) BINARY.
 01 MQ-HOBJ        PIC S9(9) BINARY.
 01 OPEN-OPTIONS   PIC S9(9) BINARY.
 01 CLOSE-OPTIONS  PIC S9(9) BINARY.
 01 MQ-COMPCODE    PIC S9(9) BINARY.
 01 MQ-REASON      PIC S9(9) BINARY.
 01 BUFFER-LENGTH  PIC S9(9) BINARY.
 01 MQ-MESSAGE     PIC X(100)
       VALUE 'Hello, this is a test message from COBOL!'.

PROCEDURE DIVISION.
MAINLINE.
    CALL 'MQCONN' USING QM-NAME MQ-HCONN
                        MQ-COMPCODE MQ-REASON
    IF MQ-COMPCODE = MQCC-FAILED
       DISPLAY 'Connection to queue manager failed, reason: ' MQ-REASON
       GO TO END-PROGRAM
    END-IF

    MOVE 'QUEUE1' TO MQOD-OBJECTNAME
    COMPUTE OPEN-OPTIONS = MQOO-OUTPUT + MQOO-FAIL-IF-QUIESCING
    CALL 'MQOPEN' USING MQ-HCONN OBJECT-DESCRIPTOR OPEN-OPTIONS
                        MQ-HOBJ MQ-COMPCODE MQ-REASON
    IF MQ-COMPCODE = MQCC-FAILED
       DISPLAY 'Open queue failed, reason: ' MQ-REASON
       GO TO DISCONNECT
    END-IF

    MOVE LENGTH OF MQ-MESSAGE TO BUFFER-LENGTH
    CALL 'MQPUT' USING MQ-HCONN MQ-HOBJ
                       MESSAGE-DESCRIPTOR PUT-MESSAGE-OPTIONS
                       BUFFER-LENGTH MQ-MESSAGE
                       MQ-COMPCODE MQ-REASON
    IF MQ-COMPCODE = MQCC-FAILED
       DISPLAY 'Put message failed, reason: ' MQ-REASON
    ELSE
       DISPLAY 'Message sent successfully.'
    END-IF

    MOVE MQCO-NONE TO CLOSE-OPTIONS
    CALL 'MQCLOSE' USING MQ-HCONN MQ-HOBJ CLOSE-OPTIONS
                         MQ-COMPCODE MQ-REASON.
DISCONNECT.
    CALL 'MQDISC' USING MQ-HCONN MQ-COMPCODE MQ-REASON.
END-PROGRAM.
    STOP RUN.

Explanation:

• MQCONN: Connects to the MQ queue manager.

• MQOPEN: Opens the specified queue for output (sending messages).

• MQPUT: Puts a message onto the queue.

• MQCLOSE: Closes the queue.

• MQDISC: Disconnects from the queue manager.

Note:

• Replace 'QMGR' and 'QUEUE1' with your actual queue manager and queue names.

• Ensure the MQ-related copybooks are correctly referenced and included in your project.

• This program is a simplified example for educational purposes and may require adaptation to fit into a real-
world environment, including enhanced error handling and dynamic configuration.

This example should help you understand the basic structure and calls required to interact with IBM MQ from a
COBOL program.

*****************************************************************************************
The difference between UNION and UNION ALL in SQL lies in how they handle duplicates:

1. UNION: Eliminates duplicate records from the result set and returns only distinct rows. This requires an extra sort/duplicate-removal step, which can make it slower, especially for large datasets.

2. UNION ALL: Returns all rows from the combined queries, duplicates included. It is generally faster than UNION because it skips the duplicate-elimination step.
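The behavioral difference is easy to demonstrate with a small SQLite example in Python (table and column names are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a(v INTEGER);
    CREATE TABLE b(v INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")

# UNION removes the duplicate 2; UNION ALL keeps both copies of it.
union = [r[0] for r in conn.execute(
    "SELECT v FROM a UNION SELECT v FROM b ORDER BY v")]
union_all = [r[0] for r in conn.execute(
    "SELECT v FROM a UNION ALL SELECT v FROM b ORDER BY v")]
```

The value 2 appears in both tables, so UNION yields three rows while UNION ALL yields four.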

*****************************************************************************************

To use MQSeries (now known as IBM MQ) in COBOL, you need to include the necessary MQSeries calls to open a
queue. Below is a simplified example of how you can use the MQOPEN API in a COBOL program to open a queue. This
example assumes you are familiar with IBM MQ basics and the environment is already set up to use MQ libraries.

Sample COBOL Program to Open an MQ Queue

IDENTIFICATION DIVISION.
PROGRAM-ID. MQOPENEX.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
* MQI named constants and the object descriptor
 01 MQ-CONSTANTS.
 COPY CMQV.
 01 OBJDESC.
 COPY CMQODV.
 01 MQMGR-NAME    PIC X(48) VALUE 'QM1'.
 01 Q-NAME        PIC X(48) VALUE 'MYQUEUE'.
 01 HCONN         PIC S9(9) BINARY.
 01 HOBJ          PIC S9(9) BINARY.
 01 OPEN-OPTIONS  PIC S9(9) BINARY.
 01 CLOSE-OPTIONS PIC S9(9) BINARY.
 01 COMP-CODE     PIC S9(9) BINARY.
 01 REASON        PIC S9(9) BINARY.

PROCEDURE DIVISION.
OPEN-QUEUE.
    MOVE Q-NAME TO MQOD-OBJECTNAME

    CALL 'MQCONN' USING MQMGR-NAME HCONN
                        COMP-CODE REASON
    IF COMP-CODE NOT = MQCC-OK
       DISPLAY 'MQCONN FAILED, REASON CODE: ' REASON
       GO TO END-PROGRAM
    END-IF

* Open the queue for shared input and for output
    COMPUTE OPEN-OPTIONS = MQOO-INPUT-SHARED + MQOO-OUTPUT
    CALL 'MQOPEN' USING HCONN OBJDESC OPEN-OPTIONS
                        HOBJ COMP-CODE REASON
    IF COMP-CODE NOT = MQCC-OK
       DISPLAY 'MQOPEN FAILED, REASON CODE: ' REASON
    ELSE
       DISPLAY 'QUEUE OPENED SUCCESSFULLY'
       MOVE MQCO-NONE TO CLOSE-OPTIONS
       CALL 'MQCLOSE' USING HCONN HOBJ CLOSE-OPTIONS
                            COMP-CODE REASON
    END-IF

    CALL 'MQDISC' USING HCONN COMP-CODE REASON
    DISPLAY 'DISCONNECTED FROM QMGR'.
END-PROGRAM.
    STOP RUN.

Explanation:

• MQCONN (Connect Queue Manager): Connects to the queue manager. HCONN is the connection handle.

• MQOPEN (Open Queue): Opens the queue. HOBJ is the returned object handle for the queue. The open-options value passed to MQOPEN, built from MQOO-* constants, controls how the queue is opened (e.g., input shared, output).

• COMP-CODE and REASON: Used to handle completion and reason codes that are returned by MQ calls.

• MQDISC (Disconnect Queue Manager): Disconnects from the queue manager.

Notes:

1. Ensure the COBOL compiler and runtime environment are configured to link with the MQ libraries.

2. Error handling in this example is rudimentary; you may need more comprehensive error handling based on
your application's requirements.

3. Replace 'QM1' and 'MYQUEUE' with your actual queue manager and queue names.

4. Adjust the open options based on how you need to access the queue (e.g., MQOO-INPUT-SHARED for reading, MQOO-OUTPUT for writing, or both combined).

This example provides a basic framework. Depending on your specific requirements and the IBM MQ version,
additional parameters and settings might be necessary.

*******************************************************************************************************************
1. MQ Basics and Concepts

• Question: Can you explain the core components of IBM MQ and their roles?

• Answer: IBM MQ primarily consists of the following components:

o Queue Manager: Acts as the server or broker managing access to queues.

o Queues: Storage for messages under the control of the queue manager.

o Channels: Communication links between queue managers or between clients and queue
managers.

o Listeners: Processes that wait for incoming network connections to a queue manager.

o Topics (for pub/sub model): Distribute messages to multiple subscribers.

o Message: Basic unit of data that MQ transports.

2. MQ Administration

• Question: How do you monitor the health and performance of an IBM MQ environment?

• Answer: Monitoring can be achieved using tools such as IBM MQ Explorer, command-line tools like
'runmqsc', or automated monitoring tools like Tivoli Omegamon. Key metrics to monitor include queue
depths, channel status, and error logs. Setting up alerts for thresholds on these metrics helps in proactive
management.
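For example, the queue depths and channel status mentioned above can be checked interactively with MQSC commands under `runmqsc` (queue and channel names here are placeholders):

```
DISPLAY QLOCAL('MY.QUEUE') CURDEPTH MAXDEPTH
DISPLAY CHSTATUS('TO.REMOTE.QM') STATUS
DISPLAY QMGR DEADQ
```

Comparing CURDEPTH against MAXDEPTH, and watching channel STATUS, covers the two most common early-warning signs of trouble.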

3. Problem Solving

• Question: Describe a challenging problem you encountered with IBM MQ and how you resolved it.

• Answer: An example could involve diagnosing and resolving an issue where messages were stuck in a queue
because of a receiver application failure. The resolution involved setting up dead-letter queues to handle
undeliverable messages and modifying the receiver application to handle exceptions better.

4. Security

• Question: What measures would you implement to secure an IBM MQ environment?

• Answer: Security can be enforced through the use of SSL/TLS for channel encryption, implementing
CHLAUTH rules to control client connections, and using MCA (Message Channel Agent) user IDs for access
control. Additionally, securing queue manager commands and adopting the latest security patches and
updates are crucial.

5. MQ Programming

• Question: How do you handle poison messages in IBM MQ? What coding practices would you use to prevent
them?

• Answer: Poison message handling involves setting the 'Backout Threshold' on the queue and configuring a
backout queue where messages that exceed the threshold can be moved. This prevents repeated processing
attempts. Coding practices include proper exception handling and transaction management to ensure
messages are either fully processed or not at all.
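The backout-threshold pattern described above can be sketched in Python, with `queue.Queue` standing in for the application queue and the backout queue (the threshold value, names, and message layout are illustrative):

```python
import queue

main_q, backout_q = queue.Queue(), queue.Queue()
BACKOUT_THRESHOLD = 3   # analogous to the queue's backout threshold attribute

def process(msg):
    # A permanently bad ("poison") message that always fails to process.
    raise ValueError("cannot parse message")

def handle_one():
    msg, backout_count = main_q.get()
    try:
        process(msg)
    except Exception:
        if backout_count + 1 >= BACKOUT_THRESHOLD:
            backout_q.put(msg)                      # give up: move to backout queue
        else:
            main_q.put((msg, backout_count + 1))    # back out and retry later

main_q.put(("POISON", 0))
while not main_q.empty():
    handle_one()
```

After three failed attempts the message lands on the backout queue instead of being retried forever, which is exactly what the threshold prevents.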

6. High Availability and Disaster Recovery

• Question: Explain how you would configure a highly available IBM MQ setup. What strategies might you
employ for disaster recovery?

• Answer: High availability can be achieved using a multi-instance queue manager setup or a cluster of queue
managers. For disaster recovery, having a replicated environment in a geographically separate location,
combined with regular backups and a well-tested failover process, is essential.

7. MQ Interoperability

• Question: How would you integrate IBM MQ with other enterprise systems or applications? Provide an
example.
• Answer: IBM MQ can be integrated using APIs available for various programming languages or through
configurable message flows using IBM Integration Bus (IIB). For example, integrating MQ with a database
system could involve using IIB to listen to MQ messages and insert data into the database.

8. MQ Troubleshooting

• Question: If messages are not being delivered, what diagnostic steps would you take to identify and resolve
the issue?

• Answer: First, check the channel status to ensure it is running. Next, verify the queue depths to ensure they
are not full. Reviewing the error logs for any reported issues is also crucial. Configuring event messages for
channel or queue managers can aid in early detection of issues.

9. MQ Enhancements and Updates

• Question: Are you familiar with any recent updates to IBM MQ? How have you adapted these new features
in your projects?

• Answer: Recent updates could include enhanced security features, improved monitoring capabilities, or
new administrative APIs. For instance, adapting the new REST API for admin tasks could improve
automation in deployments and system management.

10. Performance Optimization

• Question: Describe how you would optimize performance in an IBM MQ environment. What specific settings or configurations would you consider?

• Answer: Performance optimization can involve tuning channel configurations, adjusting buffer sizes, and optimizing the queue manager's use of system resources. Using non-persistent messages when possible can also significantly enhance throughput.

These questions and answers provide a robust framework for assessing the expertise of candidates in handling IBM
MQ in various scenarios. Adjust the complexity of the scenarios and depth of answers based on the specific role
requirements and the candidate's experience level.

*******************************************************************************************************************

CICS (Customer Information Control System) is a transaction server that runs primarily on IBM mainframe systems
under z/OS and z/VSE. It is used extensively in industries such as banking and finance for transaction processing.

Basics:

1. Transaction Management: CICS handles transactions which are units of processing initiated by a single
user request. It ensures the integrity and consistency of data.

2. Multi-tasking: CICS can handle thousands of transactions concurrently, making it highly efficient for high-
volume applications.

3. Program Management: It manages the lifecycle of running programs and provides services like program
loading, task creation, and inter-program communication.

Map and Mapset:

A map in CICS is essentially a predefined layout that determines how data appears on a terminal screen. A mapset is a
collection of one or more maps.

Creation of Map and Mapset:

1. Design the Map: You can design a map using tools like BMS (Basic Mapping Support). BMS is used to define
the fields (both input and output) on the screen. Each field can be characterized by attributes such as length,
position, and protection status (whether it can be modified by the user).

2. Generate the Map: Once designed, the map needs to be translated into a format that CICS can use. This is
done by a BMS macro assembler that generates a load module and a symbolic map.

3. Symbolic Map: This is a COBOL or PL/I copybook generated by the BMS assembler. It includes names and
attributes of all the fields defined in the BMS map. This copybook is used in application programs to refer to
map fields.

Data Handling Between Map and Program:


1. SEND MAP: This CICS command is used to display the map on the user's terminal. It transfers data from the
application program working storage to the terminal.

2. RECEIVE MAP: This command is used to fetch user input from the terminal. It transfers data from the
terminal to the program’s working storage, specifically into the areas defined in the symbolic map.

Interaction with COBOL/PL1:

• When a map is sent or received, the data fields defined in the symbolic map are used in COBOL or PL1
programs to manipulate screen data.

• Example in COBOL:

EXEC CICS SEND MAP('MAPNAME') MAPSET('MAPSETNAME') END-EXEC.

EXEC CICS RECEIVE MAP('MAPNAME') MAPSET('MAPSETNAME') INTO(data-area) END-EXEC.

• The data-area typically corresponds to the data structure defined by the symbolic map, allowing the
program to access or modify field values based on user interactions.

By managing these interactions efficiently, CICS supports complex and high-volume transaction processing
environments, interfacing seamlessly with COBOL and PL/I programs which are common in enterprise applications
on mainframes.

Creating a MAP and MAPSET in CICS involves several steps including designing the map, coding it using BMS (Basic
Mapping Support), and then compiling it. Below is an example of how to create a simple map using BMS macros.

BMS Map Coding Example

Let's say you want to create a map for a login screen with fields for username and password.

1. Define the Mapset and Map: This includes specifying the size of the screen and the map attributes.

LOGINSET DFHMSD TYPE=&SYSPARM,MODE=INOUT,LANG=COBOL,STORAGE=AUTO,      X
               CTRL=(FREEKB,FRSET),TIOAPFX=YES,DSATTS=(COLOR,HILIGHT)
LOGINMAP DFHMDI SIZE=(24,80),LINE=1,COLUMN=1

(The labels LOGINSET and LOGINMAP name the mapset and map; the trailing X is the assembler continuation character.)

2. Define the Fields: Specify each field's location, length, and attributes.

         DFHMDF POS=(5,10),LENGTH=9,ATTRB=(ASKIP,NORM),INITIAL='Username:'
USERNAME DFHMDF POS=(5,26),LENGTH=20,ATTRB=(UNPROT,NORM,IC)
         DFHMDF POS=(7,10),LENGTH=9,ATTRB=(ASKIP,NORM),INITIAL='Password:'
PASSWORD DFHMDF POS=(7,26),LENGTH=20,ATTRB=(UNPROT,NORM,DRK)

3. End the Definitions: Signal the end of this map and mapset.

DFHMSD TYPE=FINAL

Explanation of Code

• DFHMSD starts the definition of a mapset or a map. TYPE=&SYSPARM allows passing the type parameter
when assembling the map. MODE=INOUT specifies that the map can be used for both input and output.

• DFHMDF defines a field within the map. POS=(row,column) sets the position of the field. LENGTH is the size
of the field. ATTRB sets attributes like ASKIP (auto-skip), NORM (normal), IC (initial cursor position), and
UNPROT (unprotected, meaning the field can be edited).

• INITIAL provides an initial value or label for the field.

• The label coded on a DFHMDF macro (for example, USERNAME) becomes the field's symbolic name, which the COBOL program uses to reference the field.

Compiling the Map

1. Assemble the Map: Use JCL (Job Control Language) to assemble the BMS source into a physical map (load module) and a symbolic map (copybook). This is typically done with the CICS-supplied DFHMAPS procedure; the exact procedure name and DD overrides can vary by site and CICS release.

//JOBNAME JOB (ACCT),'MAP COMPILE',CLASS=A,MSGCLASS=A
//MAPSTEP EXEC DFHMAPS,MAPNAME=LOGINSET
//COPY.SYSUT1 DD *
{Insert BMS source code here}
/*

2. Copybook: The assembler also generates a symbolic map, which is a COBOL copybook that can be included
in COBOL programs to reference the map fields.

Using the Map in a COBOL Program

With the map and mapset created, you can use them in a COBOL program as follows:

IDENTIFICATION DIVISION.
PROGRAM-ID. LOGIN.
DATA DIVISION.
WORKING-STORAGE SECTION.
* Symbolic map copybook generated when the mapset was assembled
 COPY LOGINSET.
PROCEDURE DIVISION.
    EXEC CICS SEND MAP('LOGINMAP') MAPSET('LOGINSET') ERASE END-EXEC.
    EXEC CICS RECEIVE MAP('LOGINMAP') MAPSET('LOGINSET') END-EXEC.
    IF USERNAMEI = 'user' AND PASSWORDI = 'pass'
       PERFORM SUCCESSFUL-LOGIN
    ELSE
       PERFORM FAILED-LOGIN
    END-IF.

In this COBOL snippet, LOGINMAP and LOGINSET refer to the names assigned to the map and mapset when they were defined. The symbolic map copybook suffixes each field name with I for input and O for output, so the program reads the user's entries through USERNAMEI and PASSWORDI.

The EXEC CICS SEND MAP command in CICS is used to send a map to a terminal. This command transfers data from
the application program's working storage (where it has been prepared) to the terminal screen. It is commonly used
to display information or forms for user interaction.

Syntax:

EXEC CICS SEND MAP(map-name) MAPSET(mapset-name) [options] END-EXEC.

Key Parameters:

• MAP(map-name): Specifies the name of the map to be sent.

• MAPSET(mapset-name): Indicates the mapset that the map belongs to. A mapset is a collection of maps.

Optional Parameters:

• DATAONLY: Sends only the data for the map, without refreshing the entire screen. Useful for updating
specific fields without redrawing static content.
• ERASE: Clears the screen before displaying the new map.

• ERASEUP: Clears the unprotected fields on the screen before displaying the new map.

• CURSOR: Positions the cursor at a specified field when the map is displayed.

• FREEKB: Frees the keyboard after sending the map, allowing the user to start entering data immediately.

Example Usage in COBOL:

Here's a simple example of how the EXEC CICS SEND MAP might be used in a COBOL program to display a user login
screen:

EXEC CICS SEND MAP('LOGIN') MAPSET('USERAUTH') ERASE END-EXEC.

In this example:

• The map named LOGIN from the mapset USERAUTH is sent to the terminal.

• The ERASE option is used to clear the terminal screen before the map is displayed, ensuring that the map
appears on a clean screen.

Purpose and Use:

This command is crucial for interactive applications running on CICS. It enables dynamic data presentation and user
interactions via forms. After the map is sent, the program typically waits for user input, which can be processed using
the EXEC CICS RECEIVE MAP command to capture the data entered by the user.

CICS Interview Questions and Answers for Experienced Candidates

1. What is CICS?

o CICS (Customer Information Control System) is a transaction processing system developed by IBM.
It is designed to support rapid, high-volume online transaction processing. A CICS transaction is a
unit of processing initiated by a single request that may affect one or more objects. This processing
is usually interactive (screen-oriented), but background transactions are possible.

2. How does CICS handle task management?

o CICS manages tasks through its own dispatcher. Each transaction runs as a task, and the dispatcher schedules tasks for execution (ultimately on z/OS TCBs) based on their priority and on resource availability.

3. Can you explain pseudo-conversational programming in CICS?

o Pseudo-conversational programming in CICS refers to the method where the task initiated by the
terminal user is terminated every time a response is sent to the user, but the session remains
active. The next action by the user starts a new task. This method conserves system resources by
freeing them between interactions while maintaining the user's illusion of a continuous
conversation.

4. What are some common CICS tables and their purposes?

o Program Control Table (PCT): Manages transaction IDs and their corresponding programs.

o Processing Program Table (PPT): Contains information about programs like their location and
size.

o File Control Table (FCT): Manages file references and their statuses.

o Terminal Control Table (TCT): Manages terminal devices connected to the system.

5. How does CICS achieve data integrity?

o CICS ensures data integrity through various facilities like locking mechanisms, journaling, and
using Atomicity, Consistency, Isolation, Durability (ACID) properties for database access. CICS uses
Resource Recovery Services (RRS) to ensure that updates to resources are committed or backed
out synchronously.

6. What is the difference between START and XCTL CICS commands?


o The START command initiates a new, independent task and can schedule it to run at a specified time or after an interval, so it is typically used for asynchronous processing. XCTL, on the other hand, transfers control to another program within the same task, replacing the currently running program; XCTL does not return control to the calling program.
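The difference can be sketched in COBOL command-level syntax (the transaction ID, program name, and data areas are illustrative, not from a real system):

* Start a new, independent task for transaction TRN2
EXEC CICS START TRANSID('TRN2')
    FROM(WS-DATA) LENGTH(LENGTH OF WS-DATA)
END-EXEC.

* Replace this program with PROGB in the same task;
* control never returns to the statement after XCTL
EXEC CICS XCTL PROGRAM('PROGB')
    COMMAREA(WS-COMM) LENGTH(LENGTH OF WS-COMM)
END-EXEC.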

7. Explain the use of TSQ and TDQ in CICS.

o Transient Data Queue (TDQ): A sequential queue that is read destructively: each record can be read only once and is removed after it is read. TDQs are typically used for logging and spooling purposes.

o Temporary Storage Queue (TSQ): Used for storing data that can be accessed multiple times and
can be explicitly deleted when not needed. TSQs are stored either in main memory or auxiliary
storage, making them more flexible for repeated access.
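A minimal sketch of TSQ usage (the queue name and data areas are illustrative):

* Write an item to the queue, creating it if necessary
EXEC CICS WRITEQ TS QUEUE('MYTSQ01')
    FROM(WS-REC) LENGTH(LENGTH OF WS-REC)
END-EXEC.

* Read item 1 back; TSQ items remain until deleted
EXEC CICS READQ TS QUEUE('MYTSQ01')
    INTO(WS-REC) LENGTH(WS-LEN) ITEM(1)
END-EXEC.

* Delete the whole queue when it is no longer needed
EXEC CICS DELETEQ TS QUEUE('MYTSQ01')
END-EXEC.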

8. What are CICS Maps?

o CICS Maps are BMS (Basic Mapping Support) generated data structures that define the layout of
data on a terminal screen. Maps handle the input and output from terminal screens to the
application program and vice versa. They simplify the handling of screen formats for programmers.

9. How do you handle errors in CICS programs?

o Error handling in CICS programs is primarily done using the HANDLE CONDITION command,
which specifies the action to take when a particular error condition occurs. EIB (Execute Interface
Block) provides error codes that can be checked after each CICS command to determine the
success or failure of the command.

10. What improvements have you seen in recent versions of CICS?

o Recent versions of CICS have focused on improving integration with modern technologies,
enhancing scalability, and increasing robustness. Improvements include support for web services,
Java, and RESTful APIs, better security features, and enhanced diagnostic and monitoring
capabilities.

11. What is a COMMAREA and how is it used in CICS applications?

• COMMAREA (Communications Area) is a block of memory used to pass data between programs (on LINK and XCTL) and between successive transaction invocations (on RETURN TRANSID). It is particularly useful in a pseudo-conversational environment, where data must be retained across the separate tasks that make up a single user conversation.

12. Describe how you would manage state in a pseudo-conversational CICS application.

• Managing state in a pseudo-conversational CICS application typically involves using the COMMAREA to pass
data between program invocations or using Temporary Storage Queue (TSQ) to store data externally. This
approach helps maintain user context without holding onto system resources between requests.
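The COMMAREA pattern above can be sketched as follows (the transaction ID and the WS-COMM layout are assumptions for illustration):

* End this task, naming the next transaction and saving state
EXEC CICS RETURN TRANSID('TRN1')
    COMMAREA(WS-COMM) LENGTH(LENGTH OF WS-COMM)
END-EXEC.

* When TRN1 next runs, EIBCALEN is non-zero and
* DFHCOMMAREA contains the saved state.

On re-entry the program typically tests EIBCALEN: zero means a fresh conversation, non-zero means saved state is present in DFHCOMMAREA.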

13. What are some typical challenges you face while developing CICS applications and how do you
overcome them?

• Typical challenges include managing complex state across transactions, ensuring data integrity, and
optimizing performance. Overcoming these challenges involves using efficient data handling techniques,
following best practices for CICS application design, and leveraging CICS facilities such as TSQs, TDQs, and
proper use of locking mechanisms.

14. Can you explain the use and benefits of EXEC CICS LINK and XCTL?

• EXEC CICS LINK calls another program while preserving the calling program's context, so the called program returns control to the caller when it finishes; this supports modular application development. EXEC CICS XCTL transfers control to another program within the same task without preserving the calling program's context. XCTL is typically used when return to the calling program is not needed, which makes it slightly more efficient than LINK in such cases.
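LINK's return of control can be sketched as (the program name and data areas are placeholders):

EXEC CICS LINK PROGRAM('SUBPROG')
    COMMAREA(WS-COMM) LENGTH(LENGTH OF WS-COMM)
END-EXEC.
* Execution resumes here after SUBPROG issues
* EXEC CICS RETURN; after an XCTL it would not.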

15. How do you implement error handling in a CICS program?

• Error handling in CICS is implemented by using the HANDLE CONDITION command to route anticipated error conditions to handling routines, or by coding the RESP option on each CICS command and testing the returned value (equivalently, checking EIBRESP) to determine whether the command succeeded and to handle exceptions accordingly.
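A minimal sketch of RESP-based checking (the file name, key, and record layout are assumptions):

EXEC CICS READ FILE('MYFILE') INTO(WS-REC)
    RIDFLD(WS-KEY) RESP(WS-RESP)
END-EXEC.

EVALUATE WS-RESP
    WHEN DFHRESP(NORMAL)
        PERFORM PROCESS-RECORD
    WHEN DFHRESP(NOTFND)
        PERFORM RECORD-MISSING
    WHEN OTHER
        PERFORM GENERAL-ERROR
END-EVALUATE.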

16. What is CICS web support and how have you utilized it?
• CICS web support allows CICS transactions to be exposed as web services, enabling HTTP, HTTPS, or SOAP-
based access to CICS transactions. This feature can be utilized to integrate CICS applications with modern
web-based systems, increasing the accessibility and interoperability of legacy applications.

17. Discuss the role and configuration of CICS Intercommunication.

• CICS Intercommunication refers to the ability of CICS systems to communicate and share resources with each other, using MRO (multiregion operation) or ISC (intersystem communication). It is configured through CONNECTION and SESSIONS definitions (or IPCONN for TCP/IP links) in the CICS region, and supports transaction routing, function shipping, distributed program link (DPL), and asynchronous processing across CICS regions.

18. What debugging tools and techniques do you use for CICS applications?

• Common debugging tools for CICS applications include CEDF (CICS Execution Diagnostic Facility), which intercepts each CICS command in a transaction and displays its arguments and results step by step. CECI (the command-level interpreter), CICS auxiliary trace, and transaction dump analysis tools are also useful, as are vendor debuggers where installed. Techniques include setting breakpoints, tracing transaction flows, and analyzing abend codes.

19. Explain the significance and usage of ENQ and DEQ commands in CICS.

• ENQ (Enqueue) and DEQ (Dequeue) commands in CICS are used for managing resource contention by
controlling access to resources. ENQ is used to request control of a resource, preventing other tasks from
accessing it simultaneously, while DEQ releases a resource. These commands are crucial for maintaining
data integrity in environments where multiple tasks might attempt to modify the same data.
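A sketch of the protect-update-release pattern (the resource name field is illustrative):

* Serialize on a named resource before updating it
EXEC CICS ENQ RESOURCE(WS-RES-NAME)
    LENGTH(LENGTH OF WS-RES-NAME)
END-EXEC.

*    ... update the shared resource here ...

* Release the resource so other tasks can proceed
EXEC CICS DEQ RESOURCE(WS-RES-NAME)
    LENGTH(LENGTH OF WS-RES-NAME)
END-EXEC.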

20. How do you optimize CICS applications for performance?

• Optimizing CICS applications for performance can involve several strategies, such as minimizing the use of
I/O operations, efficiently managing task lifecycles, optimizing TSQ and TDQ usage, and tuning CICS
parameters for optimal throughput. Profiling and monitoring transaction execution can also provide insights
for potential performance improvements.

****************************************************************************************************************

A JCL (Job Control Language) job card is the first statement in a JCL sequence and serves as the entry point for
defining how a job is processed by the z/OS operating system on an IBM Mainframe. The job card provides
essential information such as the job name, accounting information, and processing requirements. Here is a
breakdown of a typical JCL job card:

//JOBNAME JOB (ACCOUNT),'DESCRIPTION',CLASS=A,MSGCLASS=X,MSGLEVEL=(1,1),NOTIFY=&SYSUID

Components of the Job Card:

1. //JOBNAME: This is the name of the job. It must start with two slashes followed by a name (up to 8
characters). This name is used to identify the job within the system.

2. JOB: This keyword indicates the beginning of a job.

3. (ACCOUNT): The accounting information enclosed in parentheses is used for billing or tracking job resource
usage. It's often required in real-world systems.

4. 'DESCRIPTION': A short description of the job. This is optional and enclosed in single quotes.

5. CLASS=A: The job class specifies the priority and resource allocation for the job. Classes are predefined by
the system administrator. Class A is just an example; the actual classes available can vary.

6. MSGCLASS=X: This determines where the system should send the job's output. For example, MSGCLASS=X
might direct output to a specific output queue or device.

7. MSGLEVEL=(1,1): Specifies which messages to output in the job log. The first number controls the output of
the job statement itself, and the second number controls the messages related to the job steps. For example,
(1,1) will output both job and step messages.

8. NOTIFY=&SYSUID: Specifies who to notify upon job completion. &SYSUID is a system variable that resolves
to the User ID of the person submitting the job, meaning the user will be notified.

9. REGION=: Specifies the maximum amount of memory the job can use. For example, REGION=4M allocates
up to 4 megabytes of memory for the job.
10. PRIORITY=: This parameter is used to override the default priority within the job class. It is typically used
in systems that have priority scheduling enabled.

11. TIME=: Limits the total execution time for the job. This can be specified in minutes or seconds (e.g.,
TIME=10 for 10 minutes or TIME=(,30) for 30 seconds).

12. TYPRUN=: Specifies the type of run. Common values include:

HOLD — the job is not to be run until it is specifically released.

SCAN — the job is to be checked for syntax errors but not executed.

13. COND=: Sets a condition for bypassing a job step. The test compares the coded value with the return codes of preceding steps, and the step is skipped when the test is true. For example, COND=(4,LT) means "bypass this step if 4 is less than a preceding return code"; in other words, the step runs only while all prior return codes are 4 or less.

14. RESTART=: Specifies a step from which to restart the job in case of a previous job failure. For example,
RESTART=STEP2 restarts the job from step named STEP2.

15. USER=: Specifies the user ID under which the job should run, which can be different from the user ID that
submitted the job.

16. PASSWORD=: Accompanies the USER= parameter to provide a password for the specified user ID.

17. GROUP=: Specifies a user group for the job, which can be used for security or organizational purposes.

18. Additional accounting data: standard JCL has no ACCOUNTNG keyword on the JOB statement; accounting information is supplied in the positional ACCOUNT field shown above. Some installations collect extra accounting data through JES control statements instead (for example, /*JOBPARM in JES2).
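Putting several of these parameters together, a sketch of a complete job card (the job name, account, and values are illustrative):

//PAYJOB01 JOB (ACCT123),'NIGHTLY PAYROLL',CLASS=A,
//             MSGCLASS=X,MSGLEVEL=(1,1),NOTIFY=&SYSUID,
//             REGION=4M,TIME=10,TYPRUN=SCAN

With TYPRUN=SCAN the job is only syntax-checked; remove that parameter to actually run it.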

Each part of the job card instructs the mainframe how to handle the job, making it a crucial component of JCL
scripts in mainframe operations. This setup helps in managing, scheduling, and executing batch jobs efficiently.

*********************************************************************************************************************

In mainframe environments, particularly when working with COBOL programs and DB2, a common issue during
precompilation or compilation is the "timestamp mismatch" or "consistency token mismatch." This issue occurs when
the timestamp of the DB2 package does not match the timestamp in the DBRM (Database Request Module).

To address this issue during the precompilation phase, you can use specific options depending on the tools and
environment you are working with. Here are a couple of approaches:

1. Using the REBIND Option:

o If the mismatch arose because the package in DB2 no longer matches the load module, you can rebind the package from the catalog:

o REBIND PACKAGE(collection-id.package-name)

o Note that REBIND reuses the DBRM information already in the catalog; if the program was re-precompiled, you must instead BIND the new DBRM (see the bind options below) so that the consistency tokens in the package and the load module match again.

2. Bind Options:

o When you bind or rebind a plan or package, using options like ACTION(REPLACE) can help
overwrite existing packages regardless of timestamp mismatches. This approach essentially
updates the package with the new version of the DBRM, resolving any timestamp issues.
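A sketch of such a bind step in JCL (the subsystem, collection, member, and dataset names are placeholders):

//BINDPKG  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  BIND PACKAGE(MYCOLL) MEMBER(MYPROG) -
       LIBRARY('YOUR.DBRMLIB') -
       ACTION(REPLACE) VALIDATE(BIND)
  END
/*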

3. Precompiler Options:

o Certain precompilers may have options to suppress timestamp mismatch errors. For example,
when precompiling a COBOL-DB2 program, check for flags or options in the precompiler that can
be set to ignore or bypass timestamp validation. This would be specific to the tools and software
versions you are using.

4. Automation Scripts:

o In some development environments, scripts or tools are used to automate the steps of dropping
the existing package, precompiling the DB2 program, and then binding it again with a new
timestamp. This helps in environments where frequent changes are made to DB2 programs.

5. Version Management Tools:

o Use version management or schema evolution tools that handle timestamp mismatches as part of
their deployment process. These tools can automate the process of adjusting DB2 packages and
related objects when deploying updates to DB2 SQL embedded programs.
To determine the specific flags or options available in a precompiler to ignore or bypass timestamp validation,
you would need to refer to the documentation specific to the precompiler you are using. Since precompilers can
vary widely depending on the environment (such as IBM DB2, Oracle, or others), the manual or user guide for
your specific precompiler will be the most accurate source.

For IBM DB2 environments, which are common in mainframe settings, here are some steps and considerations:

1. Precompiler Documentation:

o Check the IBM DB2 precompiler documentation for options related to timestamp mismatch
handling. IBM regularly updates its documentation online, and it can be accessed through the IBM
Knowledge Center or similar resources.

2. Ignore Timestamp Mismatch:

o Standard SQL precompilers generally do not let you simply ignore a timestamp mismatch, because the consistency token is fundamental to ensuring integrity between the load module and the package. The IBM DB2 precompiler does offer a LEVEL option that overrides the consistency token and can sidestep mismatches, but IBM documents it with strong cautions, so use it only when you fully understand the implications.

o Otherwise, manage these errors through your deployment scripts, as mentioned previously (e.g., using REBIND or BIND ... ACTION(REPLACE) options).

3. Compiler Options:

o While not directly related to ignoring timestamp mismatches, keeping your compile, precompile, and bind steps aligned helps prevent these issues: the load module must be link-edited from the same precompile that produced the DBRM being bound. (Note that LEVEL is a precompiler option, not a BIND option.)

4. Consult with DBAs:

o Database Administrators (DBAs) often have scripts or processes in place to handle schema changes
and version mismatches. Consulting with them might provide a robust solution to handle
versioning issues, including timestamp mismatches.

5. Automation and Tooling:

o Consider using modern CI/CD pipelines and automation tools for these database operations. Tools like Liquibase, Flyway, or custom scripts can automate the management of DB2 package versions and can include steps to handle timestamp issues safely where appropriate.

*********************************************************************************************************************

In COBOL, the usual equivalent of the Db2 data type DECIMAL(9,3) is S9(6)V9(3) COMP-3. This represents a signed numeric value with 6 digits before the decimal point and 3 digits after it. The letter S indicates that the number is signed, the V (implied decimal) marks the position of the decimal point, and COMP-3 stores the value in the packed-decimal format that Db2 uses for DECIMAL columns.
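As a sketch, the corresponding host-variable declaration (the data name is illustrative):

01 WS-AMOUNT PIC S9(6)V9(3) COMP-3.

This holds values from -999999.999 to +999999.999 in 5 bytes of packed-decimal storage (9 digits plus a sign nibble).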

*********************************************************************************************************************

Here is a list of some common IBM Db2 data types along with their equivalent COBOL data types, focusing on numeric
and decimal types:

1. Db2: INTEGER or INT

o COBOL: S9(9) COMP

o Represents a 4-byte integer.

2. Db2: SMALLINT

o COBOL: S9(4) COMP

o Represents a 2-byte integer.

3. Db2: BIGINT

o COBOL: S9(18) COMP

o Represents an 8-byte integer.


4. Db2: DECIMAL (5,0)

o COBOL: S9(5) COMP-3

o Packed decimal without any fractional part.

5. Db2: DECIMAL (7,2)

o COBOL: S9(5)V9(2) COMP-3

o Packed decimal with two places after the decimal.

6. Db2: DECIMAL (9,3)

o COBOL: S9(6)V9(3) COMP-3

o Packed decimal with three places after the decimal.

7. Db2: DECIMAL (12,4)

o COBOL: S9(8)V9(4) COMP-3

o Packed decimal with four places after the decimal.

8. Db2: FLOAT or DOUBLE

o COBOL: COMP-2 or COMP-1

o Floating-point numbers; COMP-1 for single precision, COMP-2 for double precision.

9. Db2: CHAR(n)

o COBOL: X(n)

o Fixed-length character string.

10. Db2: VARCHAR(n)

o COBOL: a two-part group item: a 49-level length field (S9(4) COMP) followed by a 49-level text field (X(n)).

o Variable-length character string; the length field holds the current length of the data.

11. Db2: DATE

o COBOL: 9(8) or X (10)

o Date type; numeric or character representation depending on the format used.

12. Db2: TIME

o COBOL: 9(6) or X (8)

o Time type: numeric or character representation depending on the format used.

13. Db2: TIMESTAMP

o COBOL: X(26)

o Timestamp type; usually declared as a 26-character string in the format yyyy-mm-dd-hh.mm.ss.nnnnnn.
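To illustrate several of these mappings together, a DCLGEN-style sketch of COBOL host variables (the table and column names are invented):

* Db2 columns: INTEGER, SMALLINT, DECIMAL(9,2),
* VARCHAR(30), DATE
01 EMP-REC.
   05 EMP-ID PIC S9(9) COMP.
   05 EMP-DEPT PIC S9(4) COMP.
   05 EMP-SALARY PIC S9(7)V9(2) COMP-3.
   05 EMP-NAME.
      49 EMP-NAME-LEN PIC S9(4) COMP.
      49 EMP-NAME-TXT PIC X(30).
   05 EMP-HIRE-DATE PIC X(10).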
