Deep Learning Unit - 3
Unit outline:
1. Introduction to Keras: Keras, TensorFlow, Theano and CNTK
2. Setting up a Deep Learning workstation
3. Classifying Movie Reviews: Binary Classification
4. Classifying News Wires: Multi-class Classification
5. Neural Networks: Anatomy of Neural Networks

Anatomy of Neural Networks:

Here we have to study neural networks completely:
1. What is an Artificial Neural Network?
2. How do we train a neural network?
3. How do we evaluate the performance of a neural network?

What is an Artificial Neural Network (ANN)?

Artificial Neural Networks (ANN) are algorithms based on brain function and are used to model complicated patterns and forecast outcomes. There are three kinds of layers in the network architecture:
1. Input layer
2. Hidden layer
3. Output layer

Inputs enter at the input layer, pass through one or more hidden layers, and produce results at the output layer.

In ANNs there are basically two types of networks:
1. SNN (Shallow Neural Network): has only one hidden layer between input and output.
2. DNN (Deep Neural Network): has two or more hidden layers.

How do we train a neural network?

For training, deep neural networks generally use two techniques:
1. Forward propagation
2. Backward propagation

Forward propagation: the input data flows forward through the network, from the input layer through the hidden layers to the output layer, with each connection scaled by its weight value. At each node the weighted inputs are summed, a bias is added, and an activation function is applied: y = f(Σ wᵢxᵢ + b).

Backward propagation: if the output values have not reached their targets after forward propagation, the error is propagated backwards and the weights are updated, layer by layer, from the output back towards the input.

How do we evaluate the performance of a neural network?

To improve the performance of a deep neural network we can go through three approaches:
1. Hyperparameter tuning
2. Optimization
3. Regularization

Hyperparameter tuning: consists of finding a set of optimal hyperparameter values for a learning algorithm and applying the optimized algorithm to our dataset. Examples of hyperparameters are the learning rate and the number of epochs.

Optimization techniques: optimization algorithms are responsible for reducing losses and providing the most accurate results possible. Techniques used for optimization are:
1. Gradient Descent
2. Stochastic Gradient Descent (SGD)
3. Mini-batch Stochastic Gradient Descent (MB-SGD)

Gradient Descent: gradient descent is an iterative algorithm that starts from a random point on the loss function and traverses down its slope in steps until it reaches the lowest point of that function. Depending on the shape of the function, it may settle in a local minimum rather than the global minimum.
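To make these ideas concrete, here is a minimal NumPy sketch (added for illustration, not from the original notes) that trains a single sigmoid neuron with mini-batch stochastic gradient descent. The toy data, learning rate and batch size are illustrative assumptions; with the batch size equal to the dataset size this reduces to plain gradient descent, and with a batch size of 1 it becomes SGD.

# Illustrative sketch: one sigmoid neuron trained with mini-batch SGD (toy data)
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features (assumed toy data)
y = (X.sum(axis=1) > 0).astype(float)  # toy binary labels

w = np.zeros(3)                        # weights
b = 0.0                                # bias
lr, batch_size = 0.1, 10               # assumed learning rate and mini-batch size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(50):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        y_hat = sigmoid(Xb @ w + b)             # forward propagation: f(sum(w*x) + b)
        error = y_hat - yb                      # gradient of the cross-entropy loss w.r.t. the pre-activation
        w -= lr * (Xb.T @ error) / len(batch)   # backward step: update weights
        b -= lr * error.mean()                  # backward step: update bias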
Introduction to Keras:

Keras is a high-level deep learning API developed by Google for implementing neural networks. It is written in Python and is used to make the implementation of neural networks easy. It also supports multiple backends for neural network computation. Keras is relatively easy to learn and work with because it provides a Python frontend with a high level of abstraction, while having the option of multiple backends (such as TensorFlow) for computation purposes.

Why do we need Keras?

Keras is an API that was made to be easy to learn for people. Keras was made to be simple. It offers consistent and simple APIs, reduces the actions required to implement common code, and explains user errors clearly. Prototyping time in Keras is less; this means that ideas can be implemented and deployed in a shorter time. Keras also provides a variety of deployment options depending on user needs.

Basic Example Using Keras Library

Following is a basic example to demonstrate how easy it is to train a model and do things like evaluation, prediction etc. Do not worry if you do not understand any of the steps described below. It is meant only for introducing development with Keras to you. We shall go deeper in our subsequent tutorials, and also work through many examples to gain expertise in Keras.

Import from Keras

Sequential() is a simple model available in Keras. It adds layers one on another sequentially, hence the Sequential model. For layers we use Dense(), which takes the number of nodes and the activation type.

from keras.models import Sequential
from keras.layers import Dense

Dataset

We shall consider a CSV file as the dataset. Following is a sample of it containing three observations.

6,148,72,35,0,33.6,0.627,50,1
1,85,66,29,0,26.6,0.351,31,0
8,183,64,0,0,23.3,0.672,32,1

The first eight columns are features of an experiment, while the last (ninth) column is the output label. You can download the dataset from here.

import numpy
# load dataset
dataset = numpy.loadtxt("input-data.csv", delimiter=",")
# split into input (X) and output (Y) variables

In this example, we shall train a binary classifier. Output labels are either 1 or 0.

Model

Now, we define the model using the Keras Sequential() and Dense() classes.

model = Sequential()
model.add(Dense(10, input_dim=8, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

The code is simple and easy to read. We created a Sequential() model and added three Dense() layers to it. The first Dense layer consists of 10 nodes; each node receives input from the eight input nodes, and the activation used for the node is relu (rectified linear unit). The second layer has 5 nodes and the activation function used is relu. The third layer is our output layer and has only one node, whose activation is sigmoid, to output 1 or 0. So, apart from input and output, we have two layers in between them. You can add some more layers in between with different activation functions; the selection has to be done by considering the type of data, and can also be done through experimentation.

Complete Python Program - Keras Binary Classifier

Consolidating all the above steps, we get the following Python program.
Python Program

from keras.models import Sequential
from keras.layers import Dense
import numpy

# fix random seed for reproducibility
numpy.random.seed(7)
print('random seed set')

# load dataset
dataset = numpy.loadtxt("input-data.csv", delimiter=",")

# split into input (X) and output (Y) variables
X = dataset[:, 0:8]
Y = dataset[:, 8]
print('input data loaded')

# create model
model = Sequential()
model.add(Dense(10, input_dim=8, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
print('model created')

# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print('compiled')

# fit the model
model.fit(X, Y, epochs=150, batch_size=10)
print('data fit to model')

# evaluate the model
scores = model.evaluate(X, Y)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Conclusion

In this Keras tutorial, we have learnt what Keras is, its features, the installation of Keras and its dependencies, and how easy it is to use Keras to build a model, with the help of a basic binary classifier example.

Theano:

Theano is a Python library that allows us to evaluate mathematical operations, including operations on multi-dimensional arrays, efficiently. It is mostly used in building deep learning projects, and it works much faster on a GPU (Graphics Processing Unit) than on a CPU. Theano is also known as one of the grandfathers of deep learning libraries. It is a library for manipulating and evaluating mathematical expressions, particularly matrix-valued expressions. Computations in Theano are written in a NumPy-like syntax and compiled to run efficiently at high speed.

Installation:

conda install theano
(or)
pip install theano

Simple scalar addition using Theano:

# import libraries
import theano
from theano import tensor
# create two floating-point scalars
x = tensor.dscalar()
y = tensor.dscalar()
# create an addition expression
z = x + y
# convert the expression into a callable object
f = theano.function([x, y], z)
# call the function
f(1.5, 2.5)

CNTK (Microsoft Cognitive Toolkit):

This deep learning framework was developed by Microsoft Research. The Microsoft Cognitive Toolkit describes neural networks as a series of computational steps via a directed graph. CNTK is an open-source toolkit released by Microsoft Research, and it allows developers to build and combine popular model types for deep learning applications.

Workstation:

A workstation is a special computer designed for technical or scientific applications, intended primarily to be used by a single user. A deep learning workstation is a dedicated computer that can supply the compute-intensive needs of AI and deep learning workloads. It offers significantly higher performance compared to traditional workstations by leveraging multiple graphics processing units (GPUs).

GPU (Graphics Processing Unit): a processor used to handle graphics-related tasks such as displays, photos and videos.
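Before investing in GPU hardware, it is worth checking that your framework can actually see the GPU. The following is a minimal sketch (added here for illustration, not part of the original notes) using TensorFlow, one of the backends Keras runs on; it assumes TensorFlow is installed.

# Sketch: list the GPUs visible to TensorFlow (assumes TensorFlow is installed)
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs available:", len(gpus))
for gpu in gpus:
    print(gpu)  # e.g. PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')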
Building a deep learning workstation while saving $1500

Companies like Lambda, Bizon and Digital Storm offer pre-built deep learning GPU rigs that are often more expensive than building the rig from scratch. The upside of purchasing from these companies is the support and pre-built software stack you receive, but if you don't need those, building the rig from scratch saves you money. In this blog post, we aim to build the same Vector GPU workstation, but $1500 cheaper!

The workstation is customizable, so for clarity, the following is its exact specification:

(Image: the Vector rig from Lambda that we want to build from scratch. The pre-tax price of this machine was $7630 on 12/08/2021, which is $1513 higher than what we paid Newegg.com for all the pieces.)

(Table: the price of each hardware piece, pre-tax prices on Newegg.com, purchased on 12/08/2021.)

The following is the list of hardware pieces purchased, with links to the Newegg product pages. (Product photos: Intel Core i9 CPU, Lian Li Dynamic case, EVGA GPUs, Silicon Power NVMe SSD, EVGA 1600 W power supply.) The assembly instructions are described below.

- Install the power supply in the computer case. The cable management and packing of the wires are crucial here, as the PSU has lots of power wires.
- Install the motherboard in the computer case. Don't forget to attach the port plate before installing.
- Install the CPU on the motherboard. Don't touch the pins and make sure there is no dirt. Apply the thermal paste on top of the chip.
- Attach the CPU cooler to the CPU.
- Install both GPUs on the motherboard.
- Install the NVMe SSD on the motherboard.
- Install all four RAM sticks on the motherboard.
- Connect all the power cables and computer case cables, such as the front power button and USB ports.

Notes:
1. If you are building a 4-GPU rig, as each GPU takes about 250-350 W, you need to make sure the total power is supported by your PSU and outlet.
2. Make sure the inside temperature is good by reading the temperatures of the motherboard, CPU and GPU.

(Photos during the build, e.g. after PSU installation, omitted.)

Binary Classification - Classifying Movie Reviews:

Binary classification means that we have to classify the data into two categories. Here we take the given data and study how to classify it into one of two classes. For classifying movie reviews we will use one dataset, i.e. IMDB (the Internet Movie Database). Based on this dataset we have to classify the movie reviews into two categories.

EXPERIMENT NO - 2

Design a neural network for classifying movie reviews (Binary Classification) using the IMDB dataset.

Procedure:

IMDB DataSet:

The IMDB (Internet Movie Database) dataset is a popular benchmark dataset for sentiment analysis, which is the task of classifying text into positive or negative categories. The dataset consists of 50,000 movie reviews, where 25,000 are used for training and 25,000 are used for testing. Each review is already preprocessed and encoded as a sequence of integers, where each integer represents a word in the review.

The goal of designing a neural network for binary classification of movie reviews using the IMDB dataset is to build a model that can classify a given movie review as either positive or negative based on the sentiment expressed in the review.

# Import necessary libraries
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense, Dropout
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Load the dataset
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=10000)

In this step, we load the IMDB dataset using the imdb.load_data() function from Keras. We set the num_words parameter to 10000 to keep only the 10,000 most frequent words in the vocabulary, which helps to reduce the dimensionality of the input data and improve model performance.
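Each loaded review is just a list of integer word indices. As a quick sanity check (an optional step added here, not part of the original procedure), the following sketch decodes the first training review back into words; Keras reserves indices 0-2 for padding, sequence-start and unknown-word markers, so each index is offset by 3.

# Optional sketch: decode an encoded review back into text (indices are offset by 3)
word_index = imdb.get_word_index()
reverse_word_index = {index: word for word, index in word_index.items()}
decoded = " ".join(reverse_word_index.get(i - 3, "?") for i in X_train[0])
print(decoded[:200])  # first 200 characters of the first training review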
# Preprocess the data
maxlen = 200
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)

In this step, we preprocess the data by padding the sequences with zeros to a maximum length of 200 using the pad_sequences() function from Keras. This ensures that all input sequences have the same length and can be fed into the neural network.

# Define the model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(maxlen,)))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

In this step, we define the neural network architecture using the Sequential() class from Keras. We define a model with three fully connected layers: the first layer has 128 units with ReLU activation, the second layer has 64 units with ReLU activation, and the final layer has a single unit with sigmoid activation for binary classification.

# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

In this step, we compile the model using the compile() method from Keras. We set the loss function to binary cross-entropy, which is appropriate for binary classification problems. We use the adam optimizer and track the accuracy metric during training.

# Train the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=128)

In this step, we train the model on the training data using the fit() method from Keras. We set the number of epochs to 10 and the batch size to 128. We also pass in the test data as the validation data to monitor the performance of the model on unseen data during training.

# Evaluate the model on test data
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))

Finally, we can evaluate the performance of the model on the test data using the evaluate() function from Keras.

(Sample training output, truncated: the per-epoch logs show the loss decreasing from about 18.7 while training and validation accuracy remain close to 0.50.)

Multi-class Classification:

Multi-class classification is the process of classifying the data into more than two (>2) classes. Here we take the given data and classify it into multiple classes. Example:

EXPERIMENT NO - 3

Design a neural network for classifying news wires (Multi-class Classification) using the Reuters dataset.

Procedure:

Reuters DataSet:

The Reuters dataset is a collection of newswire articles and their categories. It consists of 11,228 newswire articles that are classified into 46 different topics or categories. The goal of this task is to train a neural network to accurately classify newswire articles into their respective categories.

Input layer: this layer will take in the vectorized representation of the news articles in the Reuters dataset.

Hidden layers: you can use one or more hidden layers with a varying number of neurons in each layer. You can experiment with the number of layers and neurons to find the optimal configuration for your specific problem.

Output layer: this layer will output a probability distribution over the possible categories for each input news article. Since this is a multi-class classification problem, you can use a softmax activation function in the output layer to ensure that the predicted probabilities sum to 1.
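To see concretely what softmax does, the following is a tiny NumPy sketch (added for illustration, not part of the original procedure); the raw scores are made-up values for a hypothetical 3-class problem.

# Illustrative sketch: softmax turns raw scores into a probability distribution
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical raw outputs for 3 classes
probs = softmax(scores)
print(probs)        # approximately [0.659 0.242 0.099]
print(probs.sum())  # 1.0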
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.utils import to_categorical

We import all the necessary libraries for the model; we will use the Keras library to load the dataset and preprocess it.

# Load the Reuters dataset
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=10000)

The first step is to load the Reuters dataset and preprocess it for training; the load also splits the dataset into train and test sets. We load the Reuters dataset using the reuters.load_data() function from Keras. We set the num_words parameter to 10000 to keep only the 10,000 most frequent words, which helps to reduce the dimensionality of the input data and improve model performance.

# Vectorize the data using one-hot encoding
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1
    return results

x_train = vectorize_sequences(x_train)
x_test = vectorize_sequences(x_test)

# Convert the labels to one-hot vectors
num_classes = max(y_train) + 1
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# Define the neural network architecture
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(10000,)))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

The next step is to design the neural network architecture. For this task, we will use a fully connected neural network with an input layer, multiple hidden layers, and an output layer. We use the Dense class in Keras to add the layers to our model. Since we have 46 categories, the output layer has 46 neurons, and we use the softmax activation function to ensure that the output of the model represents a probability distribution over the 46 categories.

# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Once we have defined the model architecture, the next step is to compile the model. We need to specify the loss function, optimizer, and evaluation metrics for the model. Since this is a multi-class classification problem, we use the categorical_crossentropy loss function, the adam optimizer, and accuracy as the evaluation metric.

# Train the model on the training set
history = model.fit(x_train, y_train, epochs=20, batch_size=512, validation_data=(x_test, y_test))

After compiling the model, the next step is to train it on the training data using the fit() method in Keras. We also specify the validation data and the batch size.

# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

Evaluate the performance of the neural network on the validation set and tune the hyperparameters, such as the learning rate, number of layers, and number of neurons, based on the validation performance.
(Sample training output, truncated: per-epoch logs over 20 epochs, with training accuracy reaching about 0.79 and validation loss around 1.06.)
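After evaluation, it can be useful to inspect individual predictions. The following is a small sketch (added for illustration, not part of the original experiment); it assumes the trained model and the vectorized x_test from the steps above.

# Sketch: predict the topic of the first test newswire (assumes model and x_test above)
predictions = model.predict(x_test[:1])
print(predictions.shape)     # (1, 46): one probability per topic
print(predictions[0].sum())  # approximately 1.0, due to the softmax output
print("Predicted topic index:", predictions[0].argmax())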
