AI Unit 6 (Techknow)
AI textbook for SPPU students, TE AI and DS
PLANNING

Syllabus : Automated Planning, Classical Planning, Algorithms for Classical Planning, Heuristics for Planning, Hierarchical Planning, Planning and Acting in Nondeterministic Domains, Time, Schedules, and Resources, Analysis of Planning Approaches, Limits of AI, Ethics of AI, Future of AI, AI Components, AI Architectures.

Introduction

- Planning various tasks is a part of day-to-day activities in the real world. Say you have tests of two different subjects on one day; then you will plan your study timetable as per your strengths and weaknesses in those two subjects.
- You must also have learnt about various scheduling algorithms (e.g. first in, first out) in the Operating Systems subject, and how printers plan/schedule their printing tasks based on task importance while printing.
- These examples illustrate how important planning is. We have seen that artificially intelligent systems are rational systems, so devising a plan to perform actions becomes a part of creating an artificially intelligent agent. When we think about giving intelligence to a system or device, we have to make sure that it prioritizes between given activities or tasks.
- In this chapter we are going to learn how machines can become more intelligent by planning while performing various actions.

6.1 Introduction to Automated Planning

- Planning in Artificial Intelligence is the task of coming up with a sequence of actions to accomplish the given task. Take the example of a driver who has to pick up two people from two different places at the same time.
- There is one more definition: planning is coming up with a sequence of actions to accomplish a given goal.

Simple Planning Agent

- Take the example of an agent which can be a coffee maker, a printer and a mailing system; also assume that there are 3 people who have access to this agent. Suppose at the same time all 3 users of the agent give a command to execute 3 different tasks of coffee making, printing and sending a mail.
- Then, as per the definition of planning, the agent has to decide the sequence of these actions.

Fig. 6.1.1 : Example of a Planning Problem

- Fig. 6.1.2 depicts a general diagrammatic representation of a planning agent that interacts with the environment through its sensors and effectors/actuators. When a task comes to this agent it has to decide the sequence of actions to be taken and then execute these actions accordingly.

Fig. 6.1.2 : Planning agent

What is a planning problem ?

- We have seen in the above section what information is available while formulating a planning problem and what results are expected. It is also understandable here that the states of an agent correspond to the probable surrounding environments, while the actions and goals of an agent are specified based on a logical formalization.
- We have also learnt about various types of intelligent agents in Chapter 2, which shows that, to achieve any goal, an agent has to answer a few questions like "what will be the effect of its actions", "how will it affect the upcoming actions", etc. This illustrates that an agent must be able to reason properly about its future actions, the states of the surrounding environment, etc.
- Consider the simple Tic-Tac-Toe game. A player cannot win the game in one step; he/she has to follow a sequence of actions to win the game. While taking every next step he/she has to consider the old steps, imagine the probable future actions of the opponent and accordingly make the next move, and at the same time he/she should also consider the consequences of his/her own actions.
- Classical planning makes the following assumptions about the task environment:
  o Fully observable : The agent can observe the current state of the environment.
  o Deterministic : The agent can determine the consequences of its actions.
  o Finite : There is a finite set of actions which can be carried out by the agent at every state in order to achieve the goal.
  o Static : Events are steady; external events which cannot be handled by the agent are not considered.
  o Discrete : Events of the agent are distinct, from the starting state to the ending (goal) state, in terms of time.
- So, basically, a planning problem finds the sequence of actions to accomplish the goal based on the above assumptions. Also note that the goal can be specified as a conjunction of sub-goals.
- Take the example of a ping-pong game where points are assigned to the opponent player when a player fails to return the ball within the rules of the game. There can be a best-3-of-5 match where, to win the match, you have to win 3 games, and in every game you have to win with a minimum margin of 2 points.

Q. How does a planning problem differ from a search problem ?

- In a search problem, we look for a path from the initial state to a goal state, and actions are considered in the order in which they are to be performed. In planning, states, actions and goals are represented explicitly, so the planner can work on sub-goals in any convenient order as per the need of the system. Planning agents can deal with combined (conjunctive) goals, which plain search handles poorly.

6.2.2 Goal of Planning

- We plan activities in order to achieve some goals. The main goal can be divided into sub-goals to make planning more efficient.
- Take the example of grocery shopping at a supermarket. Suppose you want to buy milk, bread and bananas from the supermarket; then your initial state will be "at home" and the goal state will be "have milk, bread and bananas".
- Now, if you look at Fig. 6.2.1 you will understand that the branching factor can be enormous depending on the set of actions, e.g. watch TV, read book, etc., available at that point of time.

Fig. 6.2.1 : Supermarket example to understand the need for planning

- Thus the branching factor can be defined as the set of all probable actions at any state. This set can be very large, as in the supermarket example or the blocks problem. If the domain of probable actions increases, the branching factor also increases, as they are directly proportional to each other; this results in an increase of the search space.
- To reach the goal state you have to follow many steps. If you consider using heuristic functions, then you have to remember that they will not be able to eliminate states; these functions are helpful only for guiding the search over states.
- So it becomes difficult to choose the best actions (i.e. even if we go to the supermarket we need to make sure that all three listed items are picked; only then is the goal state achieved).
- As there are many possible actions, it is difficult to describe every state, and there can be combined goals (as seen in the supermarket example); searching alone is inadequate to achieve goals efficiently. In order to be more efficient, planning is required.
- In the above sections, we discussed that planning requires explicit knowledge; that means in case of planning we need to know the exact sequence of actions which will be useful in order to achieve the goal.
- An advantage of planning is that the order of planning and the order of execution need not be the same. For example, you can plan how to pay bills for grocery before planning to go to the supermarket.
- Another advantage of planning is that you can make use of a divide-and-conquer policy by dividing/decomposing the goal into sub-goals.

6.3 Approaches to Planning

There are many approaches to solve planning problems. The following are a few major approaches or algorithms used for planning:
  o Planning with state space search
  o Partial order planning
  o Hierarchical planning / hierarchical decomposition (HTN planning)
  o Planning with situation calculus / planning with operators
  o Conditional planning
  o Planning with graphs
  o Planning with propositional logic
  o Reactive planning

Out of these major approaches we will be learning about the following algorithms in detail:
  o Planning with state space search
  o Partial order planning
  o Hierarchical planning / hierarchical decomposition (HTN planning)
  o Conditional planning
  o Planning with operators
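All of these approaches need a machine representation of an action as a set of preconditions plus effects (add and delete lists), as formalized later in the STRIPS section. The following is a minimal Python sketch of such an action schema; the class layout and the toy apple action are illustrative assumptions, not notation from the text:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A STRIPS-style action: preconditions, literals added, literals deleted."""
    name: str
    precond: frozenset
    add: frozenset
    delete: frozenset

    def applicable(self, state: frozenset) -> bool:
        # The action may fire only when every precondition holds in the state.
        return self.precond <= state

    def apply(self, state: frozenset) -> frozenset:
        # Successor state: remove the delete list, then add the add list.
        return (state - self.delete) | self.add

# Toy action in the spirit of the chapter's apple example (illustrative):
eat = Action("Eat(Apple)",
             precond=frozenset({"Have(Apple)"}),
             add=frozenset({"Ate(Apple)"}),
             delete=frozenset({"Have(Apple)"}))

s0 = frozenset({"Have(Apple)"})
s1 = eat.apply(s0) if eat.applicable(s0) else s0
```

Every planning algorithm in this chapter, forward search, regression, planning graphs, can be phrased in terms of these two operations: checking applicability and computing the successor state.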
Planning Graphs

- A planning graph is a special data structure which is used to get better heuristic estimates. It is a directed graph organized into levels. Any of the search techniques can make use of planning graphs. Also, the GRAPHPLAN algorithm can extract a solution directly from a planning graph.
- Consider the following two actions:

Action(Eat(Apple),
    PRECOND : Have(Apple)
    EFFECT : ¬Have(Apple) ∧ Ate(Apple))
Action(Cut(Apple),
    PRECOND : ¬Have(Apple)
    EFFECT : Have(Apple))

Fig. 6.3.1 : Planning graph for the apple problem (literal levels L0, L1, L2 and action levels A0, A1; literals shown include Have(Apple), ¬Have(Apple), Ate(Apple), ¬Ate(Apple))

- Start at level L0 and determine action level A0 and the next level L1:
  o A0 contains all actions whose preconditions are satisfied at the previous level.
  o Connect the preconditions and effects of the actions; inaction is represented by persistence (no-op) actions.
- Level A0 contains the possible actions. Conflicts between actions are shown by mutual exclusion (mutex) links.
- Level L1 contains all literals that could result from picking any subset of actions in A0. Conflicts between literals which cannot occur together (as an effect of the selected actions) are represented by mutual exclusion links. The mutual exclusion links are the constraints that define this set of states.
- L1 defines multiple states. Continue until two consecutive levels are the same, i.e. contain the same set of literals.

A mutual exclusion relation holds between two actions when:
  o One action cancels out the effect of the other action, OR
  o One of the effects of an action is the negation of a precondition of the other action, OR
  o One of the preconditions of one action is mutually exclusive with a precondition of the other action.

A mutual exclusion relation holds between two literals when:
  o One literal is the negation of the other literal, OR
  o Each possible pair of actions that could achieve the two literals is mutually exclusive.
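The level-construction step described above can be sketched in code. Below, the actions applicable at literal level L0 (plus one persistence action per literal) form A0, and their effects extend L0 into L1; mutex computation is omitted for brevity. The dictionary encoding and the `~` prefix for negation are illustrative assumptions:

```python
# Literals are strings; "~P" denotes the negation of P (illustrative encoding).
actions = {
    "Eat(Apple)": {"pre": {"Have(Apple)"},
                   "eff": {"~Have(Apple)", "Ate(Apple)"}},
}

def next_levels(L0):
    # A0: real actions whose preconditions all appear in L0,
    # plus one persistence (no-op) action per literal in L0.
    A0 = [name for name, a in actions.items() if a["pre"] <= L0]
    A0 += ["persist(%s)" % lit for lit in sorted(L0)]
    # L1: persisted literals plus the effects of every applicable action.
    L1 = set(L0)
    for name, a in actions.items():
        if a["pre"] <= L0:
            L1 |= a["eff"]
    return A0, L1

L0 = {"Have(Apple)", "~Ate(Apple)"}
A0, L1 = next_levels(L0)
# L1 now also contains ~Have(Apple) and Ate(Apple); literal levels grow
# monotonically until two consecutive levels are identical ("levelling off").
```

Because persistence actions carry every literal forward, each literal level is a superset of the previous one, which is exactly why the construction must eventually level off.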
GRAPHPLAN algorithm

GRAPHPLAN can directly extract a solution from the planning graph with the help of the following algorithm:

function GRAPHPLAN(problem) returns solution or failure
    graph <- INITIAL-PLANNING-GRAPH(problem)
    goals <- GOALS[problem]
    loop do
        if goals all non-mutex in last level of graph then do
            solution <- EXTRACT-SOLUTION(graph, goals, LENGTH(graph))
            if solution != failure then return solution
        else if NO-SOLUTION-POSSIBLE(graph) then return failure
        graph <- EXPAND-GRAPH(graph, problem)

Properties of a planning graph:
- If a goal is absent from the last level, then the goal cannot be achieved.
- If there exists a path to the goal, then the goal is present in the last level.
- If the goal is present in the last level, there may still not exist any path (the condition is necessary, not sufficient).

6.4 Planning as State-Space Search

- We have seen the example of an agent that can perform the three tasks of printing, sending a mail and making coffee; let us call this agent the office agent. When this office agent gets orders from three people at the same time to perform these three different tasks, let us see how planning as a state-space search problem will look if we have a finite space.
- You can understand from the figure that the office agent is at location 250 on the state-space grid. When it gets a task it has to decide which task can be performed more efficiently in lesser time.
- But to do this, the agent should be aware of its own current location, the locations of the people who gave the orders, and the locations of the required devices.
- State-space search is unfavourable for solving real-world problems because it requires a complete description of every searched state; also, search should be carried out locally.
- There can be two ways of representing a state:
  1. Complete world description
  2. Path from an initial state
1. Complete world description
  o The description is available in terms of the assignment of a value to each proposition that defines the state.
  o Or we can say that the description is available as a conjunction of propositions that defines the state.
  o The drawback of this type is that it requires a large amount of space.
2. Path from an initial state
  o As per the name, the path from an initial state gives the sequence of actions which are used to reach a state from the initial state.
  o In this case, what holds in a state can be deduced from the axioms which specify the effects of the actions.
  o The drawback of this type is that it does not explicitly specify "what holds in every state". Because of this it can be difficult to determine whether two states are the same.

Fig. 6.4.2 : Representation of states (complete world description / path from an initial state)

Let us take an example of the water jug problem.

- We have two jugs, a 4-gallon one and a 3-gallon one. Neither has any measuring markers on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 gallons of water into the 4-gallon jug?
- The state space for this problem can be described as the set of ordered pairs of integers (x, y), such that x = 0, 1, 2, 3 or 4, representing the number of gallons of water in the 4-gallon jug, and y = 0, 1, 2 or 3, representing the quantity of water in the 3-gallon jug.
- The start state is (0, 0). The goal state is (2, n) for any value of n, since the problem does not specify how many gallons need to be in the 3-gallon jug.
- The operators to be used to solve the problem can be described as shown below. They are rules whose left sides are matched against the current state and whose right sides describe the new state that results from applying the rule.
Production rules for the water jug problem:

1. (x, y) if x < 4 -> (4, y) : fill the 4-gallon jug
2. (x, y) if y < 3 -> (x, 3) : fill the 3-gallon jug
3. (x, y) if x > 0 -> (x - d, y) : pour some water out of the 4-gallon jug
4. (x, y) if y > 0 -> (x, y - d) : pour some water out of the 3-gallon jug
5. (x, y) if x > 0 -> (0, y) : empty the 4-gallon jug on the ground
6. (x, y) if y > 0 -> (x, 0) : empty the 3-gallon jug on the ground
7. (x, y) if x + y >= 4 and y > 0 -> (4, y - (4 - x)) : pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full
8. (x, y) if x + y >= 3 and x > 0 -> (x - (3 - y), 3) : pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full
9. (x, y) if x + y <= 4 and y > 0 -> (x + y, 0) : pour all the water from the 3-gallon jug into the 4-gallon jug
10. (x, y) if x + y <= 3 and x > 0 -> (0, x + y) : pour all the water from the 4-gallon jug into the 3-gallon jug
11. (0, 2) -> (2, 0) : pour the 2 gallons from the 3-gallon jug into the 4-gallon jug
12. (2, y) -> (0, y) : empty the 2 gallons in the 4-gallon jug on the ground

One solution to the water jug problem:

Gallons in the 4-gallon jug | Gallons in the 3-gallon jug | Rule applied
0 | 0 | -
0 | 3 | 2
3 | 0 | 9
3 | 3 | 2
4 | 2 | 7
0 | 2 | 5
2 | 0 | 9 or 11

Fig. 6.4.3 : Part of the search tree for the water jug problem

6.4.2 Classification of Planning with State Space Search

- As the name suggests, state-space-search planning techniques are based on spatial searching.

Fig. 6.4.4 : Classification of Planning with State Space Search

- Planning with state space search can be done by both the forward and the backward state-space search techniques.

6.5 Forward State-Space Search (Progression Planning)

- "Forward state-space search" is also called a "progression planner". It is a deterministic planner that searches forward, applying sequences of actions starting from the initial state.
- Thus the prerequisites for this type of planning are the initial world-state information, details of the available actions of the agent, and a description of the goal state. Remember that details of the available actions include the preconditions and effects of each action.
- Fig. 6.5.1 gives a state-space graph of a progression planner for a simple example where flight 1 is at location A and flight 2 is also at location A; these flights are moving from location A to location B. In the 1st case only flight 1 moves from location A to location B, so the resulting state shows that after performing that action flight 1 is at location B whereas flight 2 is at its original location, A.
- Similarly, in the 2nd case only flight 2 moves from location A to location B, and the resulting state shows that after performing that action flight 2 is at location B while flight 1 is at its original location, A.

Fig. 6.5.1 : State-space graph of a progression planner

- It can be observed from Fig. 6.5.1 that rectangles show the states of the flights (i.e. their current locations), and lines give the corresponding actions from one state to another (i.e. moving from one location to another). Note that the lines coming out of every state match all of the actions which can be accepted if the agent is in that state.

Progression planner algorithm

1. Formulate the state-space search problem:
  o The initial state is the first state of the planning problem, given as a set of positive literals; the literals which do not appear are considered false.
  o An action is applicable if its preconditions are satisfied; its positive effect literals are added to the state, while its negative effect literals are deleted.
  o Perform goal testing by checking whether the state satisfies the goal of the planning problem.
  o Take the step cost of each action as 1.

6.6 Backward State-Space Search (Regression Planning)

- Basically, we try to backtrack the scenario and find out the best possibility; in order to achieve the goal we have to see what might have been the correct action at the previous state.
- In forward state-space search we needed information about the successors of the current state; now, in backward state-space search, we will need information about the predecessors of the current state.
- Here the problem is that there can be many possible goal states which are equally acceptable. That is why this approach is not considered practical when there are large numbers of states which satisfy the goal.
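The single backward step, regressing a goal through a STRIPS-style action, can be sketched as follows. The dictionary action format and the flight literals are illustrative assumptions matching the two-flight example of this section:

```python
def regress(goal, action):
    """Regress `goal` (a set of positive literals) through `action`.

    Returns the sub-goal that must hold *before* the action, or None if the
    action is not relevant/consistent for this goal. `pre`, `add` and
    `delete` are sets of literals (illustrative STRIPS-style sketch).
    """
    # Relevant: the action must achieve at least one goal literal...
    if not (action["add"] & goal):
        return None
    # ...and consistent: it must not undo (delete) any goal literal.
    if action["delete"] & goal:
        return None
    # Positive effects that appear in the goal are removed; each
    # precondition literal is added unless it already appears.
    return (goal - action["add"]) | action["pre"]

fly2 = {"pre": {"At(Flight2, A)"}, "add": {"At(Flight2, B)"},
        "delete": {"At(Flight2, A)"}}
goal = {"At(Flight1, B)", "At(Flight2, B)"}
sub = regress(goal, fly2)
# sub is {"At(Flight1, B)", "At(Flight2, A)"}: flight 1 already at B,
# flight 2 still at A, matching one predecessor state in Fig. 6.6.1.
```

The relevance test is what gives regression its lower branching factor: only actions that achieve some goal literal are ever considered.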
- Let us see the flight example. Here you can see that the goal state is: flight 1 is at location B and flight 2 is also at location B. We can see in Fig. 6.6.1 that if this state is checked backwards we have two acceptable predecessor states: in one state only flight 2 is at location B while flight 1 is at location A, and similarly in the 2nd possible state flight 1 is already at location B while flight 2 is at location A.
- As we search backwards from the goal state to the initial state, we have to deal with partial information about the state, since we do not yet know which actions will get us to the goal. This method is complex because we have to achieve a conjunction of goals.

Fig. 6.6.1 : State-space graph of a regression planner

- In Fig. 6.6.1 rectangles are goals that must be achieved and lines show the corresponding actions.

Regression algorithm

1. Firstly, predecessors should be determined:
  o To do this we need to find out which states will lead to the goal state after applying some actions to them.
  o We take the conjunction of all such states and choose one action to achieve the goal state.
  o If we say that action "X" is the relevant action for the first conjunct, then it works only if its preconditions are satisfied.
  o The previous state is checked to see if the sub-goals are achieved.
2. Actions must be consistent: they should not undo preferred (goal) literals. Positive effects of the action which appear in the goal are deleted; otherwise, each precondition literal of the action is added, unless it already appears.
3. The main advantage of this method is that only relevant actions are taken into consideration. Compared to forward search, the backward search method has a much lower branching factor.

6.6.1 Heuristics for Planning

- Progression or regression alone is not very efficient for complex problems.
- They need good heuristics for better efficiency. Finding the best solution is NP-hard (NP stands for non-deterministic polynomial time).
- There are two ways to make state-space search more efficient:
  o Use a linear method : add the steps which build on their immediate successors or predecessors.
  o Use a partial planning method : ordering constraints are imposed on the agent as per the requirement at execution time.

6.7 Total Order Planning

Q. Explain total order planning with an example.

- We have seen in the above section that forward and regression planners impose a total ordering on actions at all stages of the planning process.
- In case of Total Order Planning (TOP), we have to follow a sequence of actions for the entire task at once, and to do this we can have multiple combinations of the required actions. Here the most important thing which should be taken care of is that TOP should respect preconditions while creating the sequence of actions.
- For example, we cannot wear the left shoe without wearing the left sock, and we cannot wear the right shoe without wearing the right sock. So while creating the sequence of actions in total order planning, the wearing-left-sock action should be executed before wearing the left shoe, and the wearing-right-sock action should be executed before wearing the right shoe, as you can see in Fig. 6.7.1.

Fig. 6.7.1 : Total order planning of wearing shoes

6.8 Partial Order Planning

- In case of Partial Order Planning (POP), the ordering of actions is partial; partial order planning does not specify which of two actions placed in the plan will come first.
- With partial order planning, the problem can be decomposed, so it can work well in case the environment is non-cooperative.
- Take the same example of wearing shoes to understand partial order planning. A partial order plan combines two action sequences:
  o The first branch covers left-sock and left-shoe. In this case, to wear the left shoe, wearing the left sock is the precondition.
  o Similarly, the second branch covers right-sock and right-shoe. Here, wearing the right sock is the precondition for wearing the right shoe.
- Once these actions are taken we achieve our goal and reach the finish state.

Fig. 6.8.1 : Partial order planning of wearing shoes

6.8.1 POP as a Search Problem

Q. Define partial order planner.

- If we consider POP as a search problem, then we say that states are small (partial) plans; states are generally unfinished plans.
- If we take an empty plan, then it will consist of only the Start and Finish actions.
- Every plan has four main components, which can be given as follows:

1. Set of actions
  o These are the steps of a plan. Actions which can be performed in order to achieve the goal are stored in the set-of-actions component.
  o For example : Set of Actions = {Start, Right-sock, Right-shoe, Left-sock, Left-shoe, Finish}.
  o Here, wearing the left sock, wearing the left shoe, wearing the right sock and wearing the right shoe are the set of actions.

2. Set of ordering constraints / preconditions
  o Preconditions are considered as ordering constraints (i.e. without performing action "x" we cannot perform action "y").
  o For example : Set of Orderings = {Right-sock < Right-shoe; Left-sock < Left-shoe}; that is, in order to wear a shoe, first we should wear the sock.
  o So the ordering constraints can be: wear Left-sock < wear Left-shoe (the wearing-left-sock action should be taken before wearing the left shoe), or wear Right-sock < wear Right-shoe (the wearing-right-sock action should be taken before wearing the right shoe).
  o If the constraints are cyclic, then the plan is inconsistent; if we want to have a consistent plan, then there should not be any cycle of preconditions.

3. Set of causal links
  o A causal link records that action A achieves effect "E" for action B.
Fig. 6.8.2 : (a) Causal link in partial order planning (b) Causal link example

  o From Fig. 6.8.2(b) you can understand that if you buy an apple, its effect can be eating the apple, and the precondition of eating an apple is cutting the apple. There can be a conflict if there is an action C that has an effect ¬E and, according to the ordering constraints, comes after action A and before action B.
  o Say we don't want to eat an apple; instead we want to make a decorative apple swan. This action can come between A and B and it does not have the effect "E".
  o For example : Set of Causal Links = {Right-sock --Right-sock-on--> Right-shoe, Left-sock --Left-sock-on--> Left-shoe, Right-shoe --Right-shoe-on--> Finish, Left-shoe --Left-shoe-on--> Finish}.
  o To have a consistent plan there should not be any conflicts with the causal links.

4. Set of open preconditions
  o Preconditions are called open if they are not achieved by some action in the plan. A strategy that can be used is delaying the choice during search.
  o To have a consistent plan there should not be any open preconditions.

6.8.2 Consistent Plan as a Solution for the POP Problem

- A consistent plan is a plan with no cycles in the ordering constraints, no conflicts among the causal links, and no open preconditions; such a consistent plan is a solution for the POP problem.

6.9 Hierarchical Planning

- Complex actions can be decomposed between various states at different levels of the hierarchy. This is called operator expansion.
- For example : move(x, y, z) expands into pickup(x, y) and putdown(x, z).

Fig. 6.9.1 : Operator expansion

- Fig. 6.9.2 shows how to create a hierarchical plan to travel from some source to a destination. You can observe, at every level, how we follow some sequence of actions: Travel(source, dest.) expands into Take-Plane / Take-Train / Take-Bus; Take-Train expands into Goto(station, source), Buy-Ticket(train), Catch(train), Leave(train, dest.); Buy-Ticket expands into Goto(counter), Request(ticket), Pay(ticket), Obtain(ticket).

Fig. 6.9.2 : Hierarchical planning example

One Level Planner

- If you are planning to take a trip, then first you have to decide the location.
- To decide the location we can search for various good locations on the internet, based on weather conditions, travelling expenses, etc.
- Say we select Rajasthan. With a one level planner, first we switch on the PC, then we open the browser, after that we open the Indian Railways ticket-booking website, then we enter the date, time, etc. to book the railway ticket. After that we will have to do the hotel booking, and so on.
- This type of planning is called one level planning. If the problem is simple then we can make use of a one level planner; for complex problems, a one level planner cannot provide a good solution.

Hierarchical Planner

- In a hierarchical planner, a plan is decomposed into more primitive plans, and this can be denoted with the help of links.
- In terms of major and minor activities, a hierarchy of actions can be devised. Major activities create the skeleton of the plan: for the Rajasthan trip, reaching the railway station and boarding the train are major activities, while taking a taxi to reach the railway station, having a candle-light dinner in a palace, taking photos, etc. are the minor activities.
- In the real world there can be complex problems. For example, the captain of a cricket team plans the order of 4 bowlers over 2 days of a test match (180 overs). Number of possibilities : 4^180 (about 10^108).
- The motivation behind this planning is to reduce the size of the search space. For plain ordering we have to try out a large number of possible plans; with plan hierarchies there are limited ways in which we can select and order primitive operators.
- In hierarchical planning, major steps are given more importance. Once the major steps are decided, we attempt to solve the minor detailed actions.
- It is possible that major steps of the plan may run into difficulties at a minor step. In such a case we need to return to the major step again to produce an appropriately ordered sequence and devise the plan.

6.9.3 Planner

1. First identify a hierarchy of major conditions.
Construct a plan in levels (Major steps then minor steps), so we postpo 3. Patch major levels as detail actions become visible. 4. Finally demonstrate. Example Actions required for “Travelling to Rajasthan” can] * Opening yatra.com (1) * — Finding train (2) » © BuyTicket (3) © Gettaxi(2) + Reach railway station(3) + Pay-driver(1) © Checkin(1) Boarding train(2) Reach Rajasthan (3) _4 W Aruna ntigence (SPU) Fig. 6.9.4: Planner 1 level plan ~~ _ Buy Ticket (3), Reach Railway Station(3), Reach Rajasthan (3) 204 level plan ~~ Finding train (2), 3" Jevel plan (final) Buy ticket (3), Get taxi(2), Reach Railway station (3), Pay-driver(1), Opening yatra.com (1), Finding train (2), Check in(1), Boarding train(2), Reach Rajasthan (3). Buy ticket (3), Get taxi(2), Reach railway station (3), Boarding train(2), Reach Rajasthan (3).W Artificial intettigence (spPu) 19 Planning OT 76.10 Planni Language should be expressive enough to explain a wide variety of problems and restrictive enough to allow efficient algorithms 0 operate on it, * Planning Tanguages are knot \ction languages. Richard Fikes and Nils Nilsson developed an automated planner called STRIPS (Stanford Research Institute Problem Solver) in 1971, * Later on this name was given to a formal planning language. STRIPS is foundatio ost of the languages in order to express automated planning problem instances in currentuse... Sas ADL is an advancement of STRIPS. Pednault proposed ADL in 1987. Comparison between STRIPS and ADL Sr. ‘STRIPS language ADL No. ee) 1. | Only allows positive literals in the states, Can support both positive and negative literals. For example : A valid sentence in STRIPS is | For example : Same sentence is expressed as => Stupid A expressed as = Intelligent” Beautiful. 2u 2. | Makes use of closed-world assumption (Le. | Makes use of Open World Assumption (Le. unmentioned i Unmentioned literals are false) literals are unknown) 3. | We only can find ground literals in goals. 
We can find quantified variables in goals. For example : Intelligent a Beauti For example : 3x At (P1, x) A At(P2, x) is the goal of having as P1 and P2 in the same place in the example of the blocks 4. | Goals are conjunctions _ Goals may inv and. \ctions For example ; (Intelligent A Beautiful). al For example: (Intelligent A (Beautiful v Rich). 5. | Bffects are conjunctions Conditional effects are allowed: when P:E means E is an one pe 6. | Does not supportequality. Does not have support for typesGrab Z and Pickup 2 a Grab Y and Pickup Y 4 ThenStack Yon Grab X and Pickup X stack Xon ¥ i rie aed 4 i eS La a Fig. 6.10.1 Elementary problem is that framing problem in Al Is concerned with the question of what pce of nowt or information pertinent to the S808 Fozolve this problem we have tom make an Elementary ‘Assumption which is a Closed world assumption felt something is not asserted in the knowledge base then it is assumed to be false, this #5 ‘also called as “Negation by. failure”) “Standard sequence of actions can be given as for the block world problem : on(Y, table) on(Z, table) on(X, table) on(¥, Z) on(Z, X) on(X, Y) hand empty hand empty clear(Z) clear) clear(Y) Ri] a ‘Goal Fig. 6.10.2 We can write 4 main rules for the block world problem as follows = Rule | Precondition and Deletion List Rule1 | pickup() hand empty, on(X,table), holding0) clear(X) + Rule2 | putdown(X) holding(X) hand empty, on(Xtable), clear(X) Rule3 | stack(XY) holding(X), ss on(K¥), Rule4 | unstack(%Y) | on(XY), | clear)> W Artificial intelligence (SPPU) 6-21 Based on the above rules, plan for the block world problem : Start > goal can be specified as follows : 1. unstack(Z,X) 2. putdown(z) 3. pickup(Y) 4 stack(Y,Z) 5. pickup(X) 6. stack(X¥) * Execution of this plan can be done by making use of a data structure called "Triangular Table”. 1 | 0m(C.A)clear(c) | unstuck hand empty (CA) 2 holding (C) _| putdown (C) 4g | 07. 
Fig. 6.10.3 : Triangular table for the block world problem (shown for blocks A, B, C with the plan unstack(C, A), putdown(C), pickup(B), stack(B, C), pickup(A), stack(A, B))

•	In a triangular table there are N + 1 rows and columns. It can be seen from Fig. 6.10.3 that rows are numbered 1 → n + 1 and columns 0 → n. The first column of the triangular table indicates the starting state and the last row of the triangular table indicates the goal state.
•	With the help of the triangular table, a tree is formed to achieve the goal state.

6.10.2	Spare Tire Problem

•	Consider the problem of changing a flat tire. More precisely, the goal is to have a good spare tire properly mounted onto the car's axle, where the initial state has a flat tire on the axle and a good spare tire in the trunk. To keep it simple, our version of the problem is an abstract one, without sticky lug nuts or other complications.
•	There are just four actions : removing the spare from the trunk, removing the flat tire from the axle, putting the spare on the axle, and leaving the car unattended overnight. We assume that the car is in a particularly bad neighborhood, so that the effect of leaving it overnight is that the tires disappear.
•	The ADL description of the problem is shown below. Notice that it is purely propositional. It goes beyond STRIPS in that it uses a negated precondition, ¬At(Flat, Axle), for the PutOn(Spare, Axle) action. This could be avoided by using Clear(Axle) instead, as we will see in the next example.

Solution using STRIPS

•	Init (At(Flat, Axle) ∧ At(Spare, Trunk))
•	Goal (At(Spare, Axle))
•	Action(Remove(Spare, Trunk),
	PRECOND : At(Spare, Trunk)
	EFFECT : ¬At(Spare, Trunk) ∧ At(Spare, Ground))
•	Action(Remove(Flat, Axle),
	PRECOND : At(Flat, Axle)
	EFFECT : ¬At(Flat, Axle) ∧ At(Flat, Ground))
•	Action(PutOn(Spare, Axle),
	PRECOND : At(Spare, Ground) ∧ ¬At(Flat, Axle)
	EFFECT : ¬At(Spare, Ground) ∧ At(Spare, Axle))
•	Action(LeaveOvernight,
	PRECOND : (none)
	EFFECT : ¬At(Spare, Ground) ∧ ¬At(Spare, Axle) ∧ ¬At(Spare, Trunk) ∧ ¬At(Flat, Ground) ∧ ¬At(Flat, Axle))

6.11	Planning and Acting in Nondeterministic Domains

•	In earlier sections, we have discussed planning in environments that are fully observable and predictable. Real-world environments, however, are often neither; planning and acting on real-world problems require a more sophisticated approach.
•	One of the most important characteristics of the real world is uncertainty. In an uncertain environment, it is very important for an agent to rely upon its percepts (its series of past experiences).
•	Whenever some unexpected condition is encountered, the agent should refer to its percepts and take action accordingly. In other words, the agent should be able to replace the current plan with some other more suitable and reliable plan if something unexpected happens.
•	It should be noted that the real world itself is not uncertain; rather, human perception of the world is uncertain. In artificial intelligence, we try to give this human perception ability to the machine, and hence the machine also receives the perception of uncertainty about the real world. So the machine has to deal with incomplete and incorrect information, just as humans do.
•	Determining the condition of a state depends on the available knowledge. In the real world, knowledge availability is always limited, so most of the time conditions are nondeterministic.
•	The amount or degree of indeterminacy depends upon the knowledge available. The indeterminacy is called "bounded indeterminacy" when actions can have unpredictable effects, but the possible effects can still be enumerated.
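The STRIPS description of the spare tire problem is complete enough to be solved by a simple forward state-space search. The sketch below is illustrative only: the breadth-first search and the set-of-literals state encoding are our assumptions, not the book's algorithm, and LeaveOvernight is omitted since it only destroys tires and can never help reach the goal.

```python
# Illustrative sketch: breadth-first forward search over the spare-tire
# STRIPS actions. Each action maps to (preconditions, delete list, add list).
from collections import deque

ACTIONS = {
    "Remove(Spare,Trunk)": ({"At(Spare,Trunk)"},
                            {"At(Spare,Trunk)"}, {"At(Spare,Ground)"}),
    "Remove(Flat,Axle)":   ({"At(Flat,Axle)"},
                            {"At(Flat,Axle)"}, {"At(Flat,Ground)"}),
    "PutOn(Spare,Axle)":   ({"At(Spare,Ground)"},   # plus a negated precondition, below
                            {"At(Spare,Ground)"}, {"At(Spare,Axle)"}),
}

def plan_bfs(init, goal):
    """Return a shortest action sequence from init to a state satisfying goal."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, (pre, dele, add) in ACTIONS.items():
            if not pre <= state:
                continue
            # ADL-style negated precondition of PutOn(Spare,Axle): axle must be free
            if name == "PutOn(Spare,Axle)" and "At(Flat,Axle)" in state:
                continue
            nxt = frozenset((state - dele) | add)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

result = plan_bfs({"At(Flat,Axle)", "At(Spare,Trunk)"}, {"At(Spare,Axle)"})
print(result)
# → ['Remove(Spare,Trunk)', 'Remove(Flat,Axle)', 'PutOn(Spare,Axle)']
```

Note how the negated precondition had to be handled by an explicit extra check: plain STRIPS states cannot express ¬At(Flat, Axle), which is exactly the limitation ADL removes.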
•	Four planning strategies exist for handling indeterminacy :
	(i) Sensorless planning  (ii) Conditional planning  (iii) Execution monitoring and replanning  (iv) Continuous planning

(i)	Sensorless planning : Sensorless planning is also known as conformant planning. This kind of planning is not based on any perception. The algorithm ensures that the plan reaches its goal at any cost.
(ii)	Conditional planning : Conditional planning is sometimes termed contingency planning and deals with the bounded indeterminacy discussed earlier. The agent makes a plan, evaluates it, and then executes it fully or partly depending on the conditions.
(iii)	Execution monitoring and replanning : In this kind of planning the agent can employ any of the planning strategies discussed earlier. Additionally, it observes the plan execution and, if needed, replans, then again executes and observes.
(iv)	Continuous planning : Continuous planning does not stop after performing an action. It persists over time and keeps on planning on certain predefined events. These events include any type of unexpected circumstance in the environment.

6.12	Multi-Agent Planning

(i)	Co-operation : In the co-operation strategy, agents have joint goals and plans. Goals can be divided into sub-goals, but the sub-goals are ultimately combined to achieve the overall goal.
(ii)	Multibody planning : Multibody planning is the strategy of implementing a correct joint plan.
(iii)	Co-ordination mechanisms : These strategies specify the co-ordination between co-operating agents, combining several co-operating plans.
(iv)	Competition : Competition strategies are used when agents are not co-operating but competing with each other. Every agent wants to achieve the goal first.
6.13	Conditional Planning

•	In conditional planning we can check what is happening in the environment at predetermined points of the plan, in order to deal with ambiguous actions.
•	For a state node we have the option of choosing some action; for a chance node the agent has to handle every possible outcome.
•	Conditional planning can also keep track of every state. Actions can take place in Partially Observable Environments (POE), where we cannot be certain of the state because of imperfect sensors.
•	In the vacuum agent example, if the dirt is at Right and the agent knows about Right but not about Left, then dirt might be left behind when the agent leaves a clean square. The initial state is also called a state set or a belief state.

Fig. 6.13.2 : Conditional planning : vacuum world example (condition 2)

6.14.1	Job Shop Scheduling Problem

•	In the job shop scheduling problem, in its most basic version, there are n jobs of varying sizes, say J1, J2, ..., Jn, which are to be processed on m machines, say M1, M2, ..., Mm. Every job Ji consists of k operations, say Oi1, Oi2, ..., Oik, and each operation has a processing time Pij. The aim is to find the sequence of operations for each machine such that the total time is minimum, where at any one time only one operation is processed on a machine.
•	This problem has a number of variations, considering the three variables, namely jobs, machines and the objective function itself. A few of them are as follows.

Variations based on machines :
	o	Machines can be related or independent.
	o	Machines may have sequence-dependent setups.
	o	Machines may require a certain gap between jobs, or no idle-time.
	o	Machines may have any other limitations of time, resources, etc.
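As a concrete illustration of the definitions above, the sketch below evaluates one fixed schedule (operations processed greedily in job order) and computes its makespan, the total completion time. Finding the schedule with the *minimum* makespan is the hard optimization problem; this only scores one candidate. The function name and the example data are hypothetical.

```python
# Illustrative sketch of the job-shop setting: each job is a sequence of
# (machine index, processing time) operations; at any time a machine
# processes at most one operation, and a job's operations run in order.

def makespan(jobs, num_machines):
    machine_free = [0] * num_machines   # time at which each machine becomes idle
    job_free = [0] * len(jobs)          # time at which each job's previous op ends
    for j, ops in enumerate(jobs):
        for machine, p_time in ops:
            # an operation starts when both its machine and its job are free
            start = max(machine_free[machine], job_free[j])
            finish = start + p_time
            machine_free[machine] = finish
            job_free[j] = finish
    return max(job_free)

# Two jobs, two machines:
# J1 = M0 for 3 time units, then M1 for 2;  J2 = M1 for 2, then M0 for 4.
print(makespan([[(0, 3), (1, 2)], [(1, 2), (0, 4)]], num_machines=2))
# → 11
```

Because this simulator finishes all of J1 before touching J2, it wastes idle machine time; interleaving the two jobs would give a shorter schedule, which is exactly the kind of choice a scheduler must optimize.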
Variations based on different objective functions :
	o	To minimize the total time each machine needs.
	o	To minimize tardiness.
	o	To minimize the maximum lateness, etc.
	o	It can also be a combination of multiple objectives, leading to a multi-objective optimization problem.

Variations based on jobs :
	o	Jobs may have constraints or dependencies within themselves.
	o	Jobs and machines may have mutual constraints; for example, certain jobs can be scheduled on some machines only.
	o	Jobs may have fixed or probabilistic processing times.

•	Nowadays, the problem is often presented as an online problem (dynamic scheduling) : each job is presented one at a time, and the online algorithm needs to make a decision about that job before the next job is presented.

Analysis of Planning Approaches

•	Planning is an area of great current interest within AI, as it combines two major areas : search and logic. The cross-fertilization of ideas from the two areas has led to improvements in performance of several orders of magnitude in the last decade and to an increased use of planners in industrial applications.
•	Planning is foremost an exercise in controlling combinatorial explosion. Partial-order planning is a powerful representational approach, but unfortunately each conflict must be resolved with a choice, and the choices can multiply exponentially.
•	GRAPHPLAN avoids these choices during the graph construction phase, using mutex links to record conflicts without actually making a choice as to how to resolve them. SATPLAN represents a similar range of mutex relations, but does so by using the general CNF form rather than a specific data structure. How well this works depends on the SAT solver used.
•	Sometimes it is possible to solve a problem efficiently by recognizing that negative interactions can be ruled out. We say that a problem has serializable subgoals if there exists an order of subgoals such that the planner can achieve them in that order without having to undo any of the previously achieved subgoals.

Limits of AI

1)	High Costs of Creation : As AI is updating every day, the hardware and software need to get updated with time to meet the latest requirements. Machines need repairing and maintenance, which involve plenty of costs.
It’ s creation requires huge costs as they are very complex machines, 2) Making Humans Lazy : AI is making humans lazy with its applications automating the majority of the work Humans tend to get addicted to these inventions which can cause a problem to future generations, 3) Unemployment As Al is replacing the majority of the repetitive tasks and other works with robotshuman interference is becoming less which will cause a major problem in the employment standards. Every organization is looking to replace the minimum qualified individuals with Al robots which can do similar work with more efficiency. 4) No Emotions : There is no doubt that machines are much better when it comes to working efficiently but they cannot replace the human connection that makes the team. Machines cannot develop a bond with humans which is an essential attribute when comes to Team Management. : Lacking Out of Box Thinking : Machines can perform only those tasks which they are designed or programmed to do, tend to crash or give irrelevant outputs which could be a major backdrop. ee Ethics oe asW Artificial Inceitigence (SPPU) 6-28 2. Inequality : How do we distribute the wealth created by machines ? The majority of companies are still dependent on hourly work when it comes to products and services, By ‘using artificial intelligence, a company can drastically cut down on relying on the human workforce, and this me” that revenues will go to fewer people. Consequently, Individuals who have ownership in Al-driven companies Be make all the money. * 3. Humanity : How do machines affect our behaviour and interaction ? Artificially intelligent bots are becoming better; better and better at modelling human conversation and relat in 2015, a bot named Eugene Goostman Won the Turing Challenge for the first time. 
	In this challenge, human raters used text input to chat with an unknown entity, then guessed whether they had been chatting with a human or a machine. Eugene Goostman fooled more than half of the human raters into thinking they had been talking to a human being.
4.	Artificial Stupidity : How can we guard against mistakes ? Intelligence comes from learning, whether you are human or machine. Systems usually have a training phase in which they "learn" to detect the right patterns and act according to their input. Once a system is fully trained, it can then go into a test phase, where it is hit with more examples and we see how it performs.
5.	Racist Robots : How do we eliminate AI bias ? Though artificial intelligence is capable of a speed and capacity of processing far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, as seen in Google's Photos service, where AI is used to identify people, objects and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
6.	Security : How do we keep AI safe from adversaries ? The more powerful a technology becomes, the more it can be used for nefarious reasons as well as good. This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously. Because these fights won't be fought on the battleground only, cybersecurity will become even more important. After all, we're dealing with a system that is faster and more capable than us by orders of magnitude.
7.	Robot Rights : How do we define the humane treatment of AI ? While neuroscientists are still working on unlocking the secrets of conscious experience, we understand more about the basic mechanisms of reward and aversion.
	We share these mechanisms with even simple animals. In a way, we are building similar mechanisms of reward and aversion into systems of artificial intelligence. For example, reinforcement learning is similar to training a dog : improved performance is reinforced with a virtual reward.

Future of AI

•	These scenarios represent possible trajectories for humanity. None of them, though, is unambiguously achievable or desirable. And while there are elements of important agreement and consensus among them, there are often revealing clashes, too.

1.	Shared economic prosperity : The economic benefits of technological progress are widely shared around the world. The global economy is 10 times larger because AI has massively boosted productivity. Humans can do more and achieve more by sharing this prosperity. This vision could be pursued by adopting various interventions, from introducing a global tax regime to improving insurance against unemployment.
2.	Realigned companies : Large companies focus on developing AI that benefits humanity, and they do so without holding excessive economic or political power. This could be pursued by changing corporate ownership structures and updating antitrust policies.
3.	Flexible labour markets : Human creativity and hands-on support give people time to find new roles. People adapt to technological change and find work in newly created professions. Policies would focus on improving educational and retraining opportunities, as well as strengthening social safety nets for those who would otherwise be worse off due to automation.
4.	Human-centric AI : Society decides against excessive automation. Business leaders, computer scientists, and policymakers choose to develop technologies that increase rather than decrease the demand for workers. Incentives to develop human-centric AI would be strengthened and automation taxed where necessary.
5.
	Fulfilling jobs : New jobs are more fulfilling than those that came before. Machines handle unsafe and boring tasks, while humans move into more productive, fulfilling, and flexible jobs with greater human interaction. Policies to achieve this include strengthening labour unions and increasing worker involvement on corporate boards.
6.	Civic empowerment and human flourishing : In a world with less need to work and basic needs met by UBI (Universal Basic Income), well-being increasingly comes from meaningful unpaid activities. People can engage in exploration, self-improvement, volunteering or whatever else they find satisfying. Greater social engagement would be supported.

6.16	AI Components

•	AI is a vast field of research and it has applications in almost all possible domains. Keeping this in mind, the components of AI can be identified as follows (refer Fig. 6.16.1).

Fig. 6.16.1 : Components of AI

1.	Perception : In order to work in the environment, an intelligent agent needs to scan the environment and the various objects in it. The agent scans the environment using various sense organs, like a camera, temperature sensor, etc. This process is called perception. After capturing various scenes, the perceiver analyses the different objects in them and extracts their features and relationships among them.
2.	Knowledge representation : The information obtained from the environment through sensors may not be in the format required by the system. Hence, it needs to be represented in standard formats for further processing like learning various patterns, deducing inference, comparing with past objects, etc. There are various knowledge representation techniques like propositional logic and first order logic.
3.	Learning : The agent performs trial actions and learns by itself. This is also called unsupervised learning.