Deep Learning Rig Building

1. DL Workstation:
A DL workstation is a special computer designed for technical and scientific applications, intended primarily to be used by a single user. A deep learning workstation is a dedicated computer that can supply compute-intensive AI and deep learning workloads. It offers significantly higher performance compared to traditional workstations by leveraging multiple Graphics Processing Units (GPUs).

2. GPU (Graphics Processing Unit):
It is used to handle graphics-related tasks such as graphics, effects, and videos.

How to build a deep learning workstation while saving $1500?
Companies like Lambda, Bizon, Digital Storm offer pre-built deep learning
GPU rigs that are often more expensive than building the rig from scratch.
However, the upside of purchasing from these companies is the support and the pre-built software stack you receive; if you don't need those, building the rig from scratch saves you money. In this blog post, we aim to build the same Vector GPU workstation but $1500 cheaper! The workstation is customizable, so for clarity, the following is its exact specification:
[Image: the Vector rig from Lambda that we will build from scratch. As we can see, the pre-tax price of this machine is $7630 on 12/08/2021, which is $1513 higher than what we paid Newegg.com for all the pieces.]
The price of each hardware piece is listed below:
[Table: hardware piece pre-tax prices on Newegg.com, purchased on 12/08/2021.]
The following is the list of hardware pieces purchased, with the link to the Newegg product page. The assembly instructions are also described below.
+ Install the power supply on the computer case. The cable management and packing of the wires are crucial here as the PSU has lots of power wires.
[Image: Intel Core i9 CPU.]
+ Install the motherboard on the computer case. Don't forget to attach the port plate before installing.
+ Install the CPU on the motherboard. Don't touch the pins and make sure there is no dirt. Apply the thermal paste on top of the chip.
+ Attach the CPU cooler to the CPU.
+ Install both GPUs on the motherboard.
[Images: Lian Li Dynamic Razer case, EVGA CLC liquid cooler, Silicon Power NVMe SSD, EVGA 1600W power supply.]
+ Install the NVMe SSD on the motherboard.
+ Install all four RAMs on the motherboard.
+ Connect all the power cables and computer case cables, such as the front power button and USB ports.
Notes:
1. If you are building a 4-GPU rig, as each GPU takes about 250-350W, you need to make sure the total power draw is supported by your PSU and outlet (a rough power-budget sketch follows these notes).
2. Make sure the inside temperature is good by reading the temperatures of the motherboard, CPU, and GPU.
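As referenced in note 1, here is a rough illustration of the power-budget arithmetic. All wattages below are assumed estimates for a hypothetical 4-GPU build, not measured values from this rig:

# Rough PSU budget check for a hypothetical 4-GPU rig
gpu_watts = 4 * 350      # worst-case draw per the 250-350W range above
cpu_watts = 165          # assumed CPU TDP
other_watts = 150        # assumed motherboard, drives, fans, etc.
headroom = 1.2           # ~20% safety margin (common rule of thumb)

required = (gpu_watts + cpu_watts + other_watts) * headroom
print(f"PSU should supply at least {required:.0f} W")  # ~2060 W

A 4-GPU build can therefore exceed what a single household outlet circuit comfortably delivers, which is why the note calls out the outlet as well as the PSU.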
Photos during the build:
[Photo: after PSU installation.]
Classifying Movie Reviews (Binary Classification):
Binary classification is the process of classifying data into one of two categories. Here we are given movie review data and we have to classify it into two categories.
Example: in this case study, "Classifying Movie Reviews", we use the IMDB (Internet Movie Database) dataset provided by Keras. In this dataset we have to classify the movie reviews into two categories.

EXPERIMENT NO -2
Design a neural network for classifying movie reviews (Binary
Classification) using IMDB dataset.
Procedure:
IMDB Dataset:
The IMDB (Internet Movie Database) dataset is a popular benchmark dataset for sentiment analysis, which is the task of classifying text into positive or negative categories. The dataset consists of 50,000 movie reviews, where 25,000 are used for training and 25,000 are used for testing. Each review is already preprocessed and encoded as a sequence of integers, where each integer represents a word in the review.
The goal of designing a neural network for binary classification of
movie reviews using the IMDB dataset is to build a model that can
classify a given movie review as either positive or negative based on
the sentiment expressed in the review.
# Import necessary libraries
from tensorflow.keras.datasets import imdb
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.preprocessing.sequence import pad_sequences
# Load the dataset
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=10000)
In this step, we load the IMDB dataset using the imdb.load_data() function from Keras. We set the num_words parameter to 10000 to keep only the 10,000 most frequent words in the vocabulary, which helps to reduce the dimensionality of the input data and improve model performance.
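As a side check (not part of the original procedure), each encoded review can be decoded back into words with imdb.get_word_index(). A minimal sketch, assuming the standard Keras index offset of 3 for the reserved padding/start/unknown tokens:

# Decode the first training review back into words
word_index = imdb.get_word_index()
reverse_word_index = {index: word for word, index in word_index.items()}
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in X_train[0])
print(decoded[:200])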
# Preprocess the data
maxlen = 200
X_train = pad_sequences(X_train, maxlen=maxlen)
X_test = pad_sequences(X_test, maxlen=maxlen)
In this step, we preprocess the data by padding the sequences with zeros (and truncating longer ones) to a maximum length of 200 using the pad_sequences() function from Keras. This ensures that all input sequences have the same length and can be fed into the neural network.
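As a quick sanity check, the padded arrays can be inspected; both splits should now be 2-D integer arrays of shape (number of reviews, maxlen):

# Both splits now have a fixed second dimension of maxlen
print(X_train.shape)  # (25000, 200)
print(X_test.shape)   # (25000, 200)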
# Define the model
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(maxlen,)))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
In this step, we define the neural network architecture using the Sequential() class from Keras, with three fully connected layers. The first layer has 128 units with ReLU activation, the second layer has 64 units with ReLU activation, and the final layer has a single unit with sigmoid activation for binary classification. Dropout layers (rate 0.5) are placed between them to reduce overfitting.
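To verify the architecture, Keras can print a layer-by-layer overview with parameter counts (an optional check, not part of the original procedure):

# Print the layers and parameter counts of the model
model.summary()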
# Compile the model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
In this step, we compile the model using the compile() method from Keras. We set the loss function to binary cross-entropy, which is appropriate for binary classification problems. We use the adam optimizer and track the accuracy metric during training.
# Train the model
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10,
          batch_size=128)
In this step, we train the model on the training data using the fit() method from Keras. We set the number of epochs to 10 and the batch size to 128. We also pass in the test data as the validation data to monitor the performance of the model on unseen data during training.
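A common refinement, not used in the procedure above, is to stop training automatically once validation loss stops improving; a minimal sketch using Keras's EarlyStopping callback:

from tensorflow.keras.callbacks import EarlyStopping

# Stop when val_loss has not improved for 2 epochs and keep the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=2,
                           restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=10, batch_size=128, callbacks=[early_stop])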
# Evaluate the model on test data
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))
Finally, we can evaluate the performance of the model on the test data
using the evaluate() function from Keras.
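To use the trained model on individual reviews, the sigmoid output (a probability of the positive class) can be thresholded at 0.5 to obtain hard labels; a short sketch:

# Predict probabilities for the first three test reviews and threshold them
probs = model.predict(X_test[:3])
labels = (probs > 0.5).astype(int)
print(probs.ravel(), labels.ravel())  # probabilities and 0/1 labels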
[Training log: over 10 epochs the loss decreases (e.g. epoch 1: loss 18.6795, accuracy 0.5094; epoch 7: loss 1.1111, accuracy 0.5014, val_accuracy 0.5003), but accuracy stays near 0.50.]
Classifying Newswires (Multi-class Classification):
Multi-class classification is the process of classifying data into one of three or more classes. Here we are given some data and we have to classify it into more than two classes or categories.
Example: classifying Reuters newswires into 46 topics, as in the experiment below.

EXPERIMENT NO -3
Design a neural network for classifying newswires (Multi-class classification) using the Reuters dataset.
Procedure:
Reuters Dataset:
The Reuters dataset is a collection of newswire articles and their categories. It consists of 11,228 newswire articles that are classified into 46 different topics or categories. The goal of this task is to train a neural network to accurately classify newswire articles into their respective categories.
Input layer: This layer will take in the vectorized representation of the news articles in the Reuters dataset.
Hidden layers: You can use one or more hidden layers with a varying number of neurons in each layer. You can experiment with the number of layers and neurons to find the optimal configuration for your specific problem.
Output layer: This layer will output a probability distribution over the possible categories for each input news article. Since this is a multi-class classification problem, you can use a softmax activation function in the output layer to ensure that the predicted probabilities sum to 1.
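As a small standalone illustration of why softmax fits here (not part of the experiment's code): it maps arbitrary scores to probabilities that sum to 1.

import numpy as np

# Numerically stable softmax over a vector of scores
def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores))        # approximately [0.659 0.242 0.099]
print(softmax(scores).sum())  # 1.0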
import numpy as np
from tensorflow.keras.datasets import reuters
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.utils import to_categorical
We will import all the necessary libraries for the model, and we will use the Keras library to load the dataset and preprocess it.
# Load the Reuters dataset
(x_train, y_train), (x_test, y_test) = reuters.load_data(num_words=10000)
The first step is to load the Reuters dataset and preprocess it for training. We will also split the dataset into train and test sets.
In this step, we load the Reuters dataset using the reuters.load_data() function from Keras. We set the num_words parameter to 10000 to keep only the 10,000 most frequent words, which helps to reduce the dimensionality of the input data and improve model performance.
# Vectorize the data using one-hot encoding
def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(x_train)
x_test = vectorize_sequences(x_test)
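To see what vectorize_sequences produces, here is a toy example with an illustrative input: the sequence [3, 5] becomes a 10,000-dimensional multi-hot vector with 1.0 at positions 3 and 5 and 0.0 everywhere else.

# Toy demonstration of the multi-hot encoding
demo = vectorize_sequences([[3, 5]])
print(demo.shape)          # (1, 10000)
print(demo[0, [3, 5, 7]])  # [1. 1. 0.]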
# Convert the labels to one-hot vectors
num_classes = max(y_train) + 1
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# Define the neural network architecture
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(10000,)))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
The next step is to design the neural network architecture. For this task, we will use a fully connected neural network with an input layer, multiple hidden layers, and an output layer. We will use the Dense class in Keras to add the layers to our model. Since we have 46 categories, the output layer will have 46 neurons, and we will use the softmax activation function to ensure that the output of the model represents a probability distribution over the 46 categories.
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
Once we have defined the model architecture, the next step is to compile the model. We need to specify the loss function, optimizer, and evaluation metrics for the model. Since this is a multi-class classification problem, we will use the categorical_crossentropy loss function. We will use the adam optimizer and accuracy as the evaluation metric.
# Train the model on the training set
history = model.fit(x_train, y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_test, y_test))
After compiling the model, the next step is to train it on the training data. We will use
the fit method in Keras to train the model. We will also specify the validation data
and the batch size.
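Since fit() returns a history object, the training and validation curves can be plotted afterwards (a sketch, assuming matplotlib is installed):

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
plt.plot(history.history['accuracy'], label='train')
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()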
# Evaluate the model on the test set
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)
Evaluate the performance of the neural network on the validation set and tune the hyperparameters such as learning rate, number of layers, number of neurons, etc., based on the validation performance (one possible learning-rate sweep is sketched below).
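One possible way to do such tuning, sketched for the learning rate only; the rebuild-per-candidate helper below is hypothetical and not part of the original code:

from tensorflow.keras.optimizers import Adam

def build_model():
    # Same architecture as above, rebuilt fresh for each candidate
    m = Sequential()
    m.add(Dense(64, activation='relu', input_shape=(10000,)))
    m.add(Dropout(0.5))
    m.add(Dense(64, activation='relu'))
    m.add(Dropout(0.5))
    m.add(Dense(num_classes, activation='softmax'))
    return m

# Compare validation accuracy across a few learning rates
for lr in (1e-2, 1e-3, 1e-4):
    m = build_model()
    m.compile(optimizer=Adam(learning_rate=lr),
              loss='categorical_crossentropy', metrics=['accuracy'])
    h = m.fit(x_train, y_train, epochs=5, batch_size=512,
              validation_data=(x_test, y_test), verbose=0)
    print(lr, max(h.history['val_accuracy']))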
[Training log: accuracy improves over the 20 epochs, reaching about 0.79-0.81 training accuracy (e.g. loss 0.8282, accuracy 0.7931, val_loss 1.0629).]