
CODE Magazine - JAN/FEB 2022
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

Laravel, Git, Async, Minimal APIs, CSVHelper, jQuery

Kafka: Event-Streaming Platform
Managing CSV Files with CSVHelper
Exploring Laravel: Part 2
Going Knative on Docker Containers

DEVintersection: The Intersection of Technology
SCOTT GUTHRIE, Executive Vice President, Cloud + AI Platform, Microsoft
CHARLES LAMANNA, Corporate Vice President, Business Applications & Platform, Microsoft
SCOTT HUNTER, Vice President, Product Management, Azure, Microsoft
SCOTT HANSELMAN, Director, Partner Program Manager, Microsoft
JEFF FRITZ, Senior Program Manager, Microsoft
KATHLEEN DOLLARD, Principal Program Manager, Microsoft
JOHN PAPA, Principal Developer Advocate Lead, Microsoft
KARUANA GATIMU, Principal Manager, Customer Advocacy Group, Microsoft Teams Engineering, Microsoft
DAN WAHLIN, Cloud Developer Advocate Manager, Microsoft
JEFF TEPER, Corporate Vice President, Microsoft Teams, Microsoft SharePoint, Microsoft OneDrive, Microsoft
DAN HOLME, Principal Product Manager Lead for Yammer, Microsoft
PAUL YUKNEWICZ, Lead Product Manager, Microsoft
...and many more!

Featuring a bonus track dedicated to Microsoft Viva

April 5–7, 2022, Las Vegas, NV, MGM Grand
Workshops April 3, 4, 8

Here are just a few of the topics covered in sessions and workshops:
.NET 6 • .NET MAUI • BLAZOR • C# 10 • ANGULAR • AZURE • AI
AZURE SQL • MODERN DATA • SQL SERVER • KUBERNETES • DEVOPS
PROJECT DESIGN & UI • SECURITY • MACHINE LEARNING • AZURE LOGIC
MICROSOFT POWER PLATFORM • MICROSOFT TEAMS • MICROSOFT SHAREPOINT

APRIL 2022 REGISTRATION for all conferences: When you REGISTER EARLY for a WORKSHOP PACKAGE, you'll receive a choice of hardware (Surface Go 3, Surface Earbuds, Surface Headphones, or iPad Mini) or a hotel gift card! Go to DEVintersection.com or M365Conf.com for details.

DEVintersection.com • AzureAIConf.com • M365Conf.com
203-527-4160, M–F, 12–4 EST
TABLE OF CONTENTS

Features

8   The Basics of Git
    If you haven't heard of Git, you've clearly been off the grid for a long time. Sahil talks about this ubiquitous tool, and maybe shows you something you didn't know about it.
    Sahil Malik

18  Enhance Your MVC Applications Using JavaScript and jQuery: Part 3
    Paul continues his series on how to make your MVC applications more fun to build and more comfortable for your users.
    Paul D. Sheriff

29  Software Development is Getting Harder, Not Easier
    Software development is complicated. You wouldn't love it if it weren't, right? Mike talks about dealing with complexity like an old friend who's part of your projects.
    Mike Yeager

32  Beginner's Guide to Deploying PHP Laravel on the Google Cloud Platform: Part 2
    Bilal continues his series on the PHP Laravel framework by connecting the app to a local MySQL database, involving the Google Cloud SQL service, and then running a Laravel database migration from the Cloud Build workflow.
    Bilal Haidar

48  Working with Apache Kafka in ASP.NET 6 Core
    Kafka is an open-source, high-throughput, low-latency messaging system for distributed applications. Joydip shows you how it's what you've been waiting for.
    Joydip Kanjilal

57  The Secrets of Manipulating CSV Files
    Rod shows you that CSV is anything but old news.
    Rod Paddock

62  Minimal APIs in .NET 6
    Controller-based APIs have been around for a long time, but .NET 6 changes everything with a new option. Shawn shows you how it works.
    Shawn Wildermuth

66  Simplest Thing Possible
    John revives his old series with an interesting study of Tasks so you can take your .NET features to the next level.
    John V. Petersen

69  Running Serverless Functions on Kubernetes
    Peter explains how to automate load balancing, scaling, and more, using Kubernetes' primitives and container technology.
    Peter Mbanugo

Columns

74  CODA: On Plain Language
    John uses the Agile movement to explain why simple is better.
    John V. Petersen

Departments

6   Editorial
25  Advertisers Index
73  Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. The Bill Me option is available only for US subscriptions. Back issues are available. For subscription information, send e-mail to [email protected] or contact Customer Service at 832-717-4445 ext. 9. Subscribe online at www.codemag.com

CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A. POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.


EDITORIAL

Finding Inspiration
This editorial marks a huge milestone in my life: It officially marks the end of my first 20 years as editor in chief of CODE Magazine. And before you get any ideas, this is NOT my last editorial! I have many more years ahead of me to "entertain" you with my witty banter and deep knowledge of software engineering, science, music, and Dungeons & Dragons. Some of that statement is true.

When I started thinking about this editorial, I reviewed a bunch of my past editorials and was proud of what we've accomplished at CODE Magazine in the last 20 years. It's amazing how much things have changed in that time. The early 2002 issues were all about this new ".NET Initiative," Web Services, XML, and XSLT. The cloud was non-existent at the time, there was no Twitter, no Facebook; heck, Amazon's primary business was selling books. How things have changed!

As some of you know, I was hired as EIC of CODE Magazine via an instant messaging session (ICQ, I think) and, to be honest, this was a dream come true. From the time I was in high school, I dreamt of being a writer. Did I want to be a tech writer? Heck no! I wanted to be a Dungeons & Dragons writer. That dream was partially fulfilled in high school when I published my first D&D article called "The Role of Taxes." I was a geek then and I'm a true geek now. So, for those long-time readers (and new ones, of course), where am I going with this? Well, I want to talk about inspiration.

As I was thinking about the things that inspired me and how to best represent them in this editorial, I decided on a picture. Like many geeks, I've spent decades collecting various geek trophies of things I enjoy. Figure 1 shows a bookshelf containing the many, many things that provide me with comfort and inspiration. I'm going to highlight a few of them.

Star Wars
I'll never forget the opening credits of Star Wars. The title card read: "A long time ago, in a galaxy far, far away...." I fondly remember my eight-year-old self writing "books" about Star Wars with an oversized pencil. Writing new adventures for Luke, Leia, and Han was my bag.

Dungeons & Dragons
I discovered Dungeons & Dragons when I was 12 years old. I remember getting my first module on my 13th birthday, called "The Glacial Rift of the Frost Giant Jarl." Yes, that was the title, and some 40 years later, I can still recite the names of many of these modules. The names were epic. D&D was (and still is) an amazing game, and it took me to many mythological as well as real-world places. From Greek to Roman to Norse to Tolkien, every mythology was represented. As for the real-world places, I met many other gamers in high school (I was president of the Golden Dragon Club at one point) and at numerous conventions in places like Los Angeles and Milwaukee. I was deep into this game, and that inspired me to start writing about it. Like all new writers, I got a TON of rejection letters, but I persevered and had some minor successes. These successes drove me forward.

Programming
I was determined to be a writer until I discovered programming. Programming has always been fun for me, and when I discovered databases in college, I knew what I wanted to do for a career. I started in the DOS era and have continued to write code for over 30 years now. I still find enjoyment in slinging code to this day. Over the years, I've had the opportunity to work with some great developers and have grown to be a fairly skilled programmer myself. But however skilled I became, I missed writing. It was programming skills that eventually led me back to writing.

Writing
In 1992, I decided to see if I could get published in a computer magazine. I went to a software conference and proposed an idea to Dian Schaffhauser, who was an editor at Database Advisor Magazine. She accepted, and I went to work writing my first article. One article led to another, and another, and another, and eventually it led to writing books and finally to being EIC of CODE Magazine. I've never stopped writing. As a matter of fact, I wrote an article for this issue! Writing has been a source of inspiration to, well, keep writing. It's a sickness, I think. Talk to me about writing books some time.

Finding YOUR Inspiration
I described just a few items that inspire me. There are others: music, pop art, movies, and economics, to name a few. The trick for you is to find what inspires you and lean into it. Having sources of inspiration is what makes our world go around. I hope you find yours!

Rod Paddock

Figure 1: My collection of inspirational trophies
CUSTOM SOFTWARE DEVELOPMENT • STAFFING • TRAINING/MENTORING • SECURITY

MORE THAN JUST A MAGAZINE!

Does your development team lack skills or time to complete all your business-critical software projects? CODE Consulting has top-tier developers available with in-depth experience in .NET, web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.

Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.

codemag.com/code
832-717-4445 ext. 9 • [email protected]
ONLINE QUICK ID 2201021

The Basics of Git


I can't think of any other skill besides Git that is universally applicable to any developer. It doesn't matter if you write code in C#, JavaScript, or Python, or for Windows or Mac or really anything else; there's a solid chance that these days you use Git for source control. And no, that doesn't mean just GitHub. Git is an open standard backed by the Git community. Various other products, such as Azure DevOps, Bitbucket, and Atlassian, all support Git.

First things first. I'm going to avoid the lightning rod discussion of whether Git is a good product or not. The reality is that whether you like it or not, all of us use it. And let's be honest: It has proven to be scalable enough for the largest source code repositories, and it's pretty easy to get started with, too.

Yet it's one of those products that really drives me mad. So I thought it might be worth writing an article explaining the basics of Git. With a strong foundation, you can build taller buildings.

Let's start learning Git.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Centralized Source Control vs. Decentralized Source Control
If you've worked with older source control software, such as Visual SourceSafe, PVCS, or many others before that, you're familiar with centralized source control. In centralized source control, there's a server in the middle that all developers talk to. Any software project is comprised of many files. If you wish to work on a certain file, you check out that file. While that file is checked out, its status is marked checked out in the centralized source control repo. If any other developer wishes to overwrite that file, they're unable to, because it's checked out to you. You need to check in your changes first, and the other developer's changes are the other developer's headache. The other developer must probably do a merge or something similar. All of this works fine, but it has two main problems.

The first issue is what happens if that central server goes down. You can continue to work on the previous snapshot you pulled from the server. But sooner or later, when you need to re-sync your changes to the server or check in files, you hit a wall. You can't, for instance, continue working with source control locally. For that matter, you can't work with an alternate remote upstream location in the meantime, such as a co-worker's source control. And what if you want source control on just your computer, without any need to share with the rest of the world, for a pet project that's complex enough to deem source control?

The second issue, of course, is scale. Centralized source control repos assume a small set of developers working very closely together. These days, we all contribute to very large source control repos, which are typical in popular open-source projects. A centralized source control mechanism that relies on locking in a central location simply doesn't scale to the general complexity of large-scale repos and a disconnected working model.

Both of these issues are fuzzy in nature. Visual SourceSafe fans insist that there are workarounds to these problems. But just because you can row to Japan in a tiny boat doesn't mean it's a good idea. To the rest of the world, it's clear that we need a new approach: decentralized source control.

The opposite of centralized source control is decentralized source control, of which Git is an example. In a decentralized source control mechanism, you can have many locations with the source control repo. These locations can be servers, they can be your own local hard disk, or they can be a coworker's hard disk. You can merge changes between these multiple source control repos. Also, you don't rely on exclusive check-in and check-out anymore. Instead, you rely on merges and commits. This invariably has the downside of merge conflicts. Good coding patterns, good architectural practices, and writing good tests reduce this pain to some degree, although they don't eliminate it.

Install Git
Many development tools, such as Xcode, already come with Git packaged. Even if you already have Git on your computer, it's a good idea to update it. The instructions are unique per operating system. Rather than rehashing instructions here, I suggest that you visit https://git-scm.com/book/en/v2/Getting-Started-Installing-Git and follow the instructions for your operating system to install Git on your computer.

Once you've installed it, you should be able to run the command "git" on terminal. For Windows, you'll notice that after installation, you get a special terminal called "Git bash". This is a special terminal/command window on Windows that tries to emulate a Unix-like terminal. You're welcome to use it, although I've also used Git through the PowerShell window and never run into any issues. I do feel that you should lean on a Unix-like terminal even on Windows, because a lot of commands invariably end up making use of Unix-like commands intertwined with Git commands. Most devs mix and match them without even thinking about it.

Configure Git
Before you can use Git, you have to do some basic configuration. At the bare minimum, you'll need to specify a name and email; this is your information, who you are when you issue a commit. Of course, the server-side repo also authenticates you through the various means that Git supports.

You can also optionally specify a default editor, and I highly recommend that you specify a line ending format as well.

Let's perform this basic configuration on your computer.

When you perform Git configuration, you can do so at one of three levels. You can do so at the system level, which affects all users on your computer. You can do so at the global level, which affects all work in your user profile; those settings go in a hidden file called ".gitconfig" in your home folder. Or you can configure at the repository level, where you wish to have certain settings affect only certain repos. My gitconfig looks like that shown in Figure 1.
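To make the three levels concrete, here's a minimal sketch; the flags are standard Git, and the values are just placeholders:

git config --system core.autocrlf input    # all users on this computer (may need admin rights)
git config --global user.name "Your Name"  # your user profile, stored in ~/.gitconfig
git config --local user.email "[email protected]"  # only the current repo, stored in .git/config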



First, let's configure the username and email address. Here's how. Remember to use your own username and email address.

git config --global user.name "Sahil Malik"
git config --global user.email [email protected]

Next, let's specify a default editor. By default, Git uses Vim. A lot of people love Vim. Personally, I never have to restart my Mac unless I'm trying to exit Vim. There are just too many damned shortcut keys to remember. No, I don't dislike it; in fact, when I'm ssh'ed into a Docker container, using something such as VSCode may not be an option. But I do find myself more productive in VSCode, so I'll just set that as my default editor as follows.

git config --global core.editor "code --wait"

Of course, for the above to work, VSCode should be installed and be in your path. Now, whenever you need to enter multi-line commit messages, or do stuff that requires any kind of editing, VSCode pops up. Let's try this. Run the below command to edit all your settings.

git config --global -e

As you can see, VSCode pops open with your .gitconfig settings. No longer do you have to remember the shortcut "shift+ZZ" to save and exit, because this isn't Vim.

Figure 1: VSCode as my default editor for Git.

Figure 1 shows my settings, which have a few additional things I haven't talked about. Your settings file may look slightly different.

Finally, let's configure end-of-line settings. This is a very important setting, so let's understand what this is. On Windows, an end of line looks like this:

text\r\n

On Mac/Linux, it looks like this:

text\n

Notice the difference? Windows likes to use carriage return and new line. The reasons for this are historical, and so deeply rooted that Windows isn't going to change. But this creates a big problem when some of your developer friends are on Macs and you're on Windows. In fact, when contributing to OSS projects, this will invariably be the case. So as a best practice, perform the following configuration on Windows:

git config --global core.autocrlf true

This will cause Git to strip out the \r's (carriage returns) when checking your files in.

Getting Help
There seem to be a lot of commands you need to remember here. Luckily, there's help. At any point, you can issue the following command to get help:

git --help

And if you wish to get specific help on a sub-command, you can issue a command like this:

git config --help

One other thing I highly recommend: If you're on a Mac or Linux environment, set up zsh with a theme called "oh-my-zsh". It makes great use of the Git plug-in and gives you syntax highlighting on the terminal and even tab completion. This can be seen in Figure 2. On Windows, you can either use the instructions at https://winsmarts.com/running-oh-my-zsh-on-windows-10-6fcb0fbc736b, you can set up WSL2, or you can use posh-git.

Figure 2: Git with tab completion

Initialize a Git Repo
A Git repo, or repository for short, lives in a folder. Go ahead and create a new folder. I created one called "gitlearn". To initialize a new empty repository in this folder, when inside this folder in terminal, issue the following command:

git init

This creates a new Git repository in this folder. Additionally, it creates one branch in this empty repository. The name of the branch by default is "master", although you can configure Git to use "main" by default if you prefer. You can choose to change the name of the initial default branch as follows:

git config --global init.defaultBranch "main"

What makes a folder a Git repository? Inside this folder is a hidden folder called ".git". This folder is where Git likes to store all its inner workings. If you delete this folder, it's no longer a Git repository. I advise you to not hand-edit stuff inside this folder. Leave it alone.

Commit Code
Let's first understand the very basics of the Git workflow. A typical project is comprised of multiple files and folders.


In Git, you have to think in three parts: local files, committed files, and the staging area.

Your local files are your work-in-progress, also known as your working copy. Here, you're making things, breaking things, and editing stuff, and edits don't preserve history. Git, of course, tracks what files are being changed, added, or deleted. But if you edit a file multiple times in a single commit, to Git, it appears as one edited file, not many versions of this edited file. This is work in progress, after all.

Then you have committed files. For now, let's not mix in server and client. The cool thing about Git is that even your local hard disk has commits. I find this very useful when I'm progressively building a project, and I commit as I go along. I can revert back to a commit if I mess up, or find out where I messed up using diffs, etc., all without involving a server. See how productive Git is? Even without an Internet connection, I have so much power at my fingertips.

Now let's also add a server briefly. When you add committed code to your local repo, you can choose to "push" to an upstream location. That upstream location is the server. During the push, you may have to resolve conflicts, merge your team members' code, etc.

Finally, you have an area that sits in the middle of the committed area and the working area, which is the staging area. The staging area is your "proposal to commit." Think of it as: Okay, I've been working on stuff in my local area, and I'm ready to commit. And here is my proposal of what I want to commit. For instance, I propose these three files to be added, these two files to be renamed, etc. I "stage" those changes in the staging area before I commit. And then I commit.

Let's see this in action. If you've been following this article sequentially, you should have an empty Git repo. In this repo, go ahead and add a file like the next snippet. Note that my commands shown here are for the *nix shell, but you can extrapolate this on a Windows computer also.

echo "first file" > readme.md

This command creates a file called "readme.md" with "first file" as the text contents. Note how my zsh prompt has also changed in Figure 3. I find this color change a very convenient mechanism to know that my tree is dirty.

Figure 3: My prompt tells me that I have unstaged changes.

The yellowish color indicates that my repo has unstaged changes. If I wish to see what the current status of my Git repo is, I can use the following command.

git status

In my case, it should produce output as shown in Figure 4.

Figure 4: Git status tells me that I have an untracked file.

Git status tells me that I have an untracked file. That makes sense, as I just added a new file, but Git isn't tracking it for me; I haven't ever added it to my repo. To add it, I need to commit it. But before I can commit it, I need to stage it. To stage all changes, I can issue the following command.

git add .

Figure 5: I have staged, but not yet committed.
Figure 6: My first commit
Figure 7: My code in the cloud



Or alternatively, I can say git add and pass in the specific files I wish to stage. Notice again, in Figure 5, how my prompt has changed.
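For instance, these are all standard ways to stage, as a quick sketch:

git add .            # stage everything under the current folder
git add readme.md    # stage just one file
git add -p           # interactively pick individual hunks to stage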

Now to commit, I simply issue the following command.

git commit

This should pop open your default editor, in my case VSCode, to enter a commit message. I could also say git commit -m "message" to avoid opening the editor. I'll enter some message like "My first commit" and save to commit. Now my prompt should change as shown in Figure 6.

Congratulations, you just did your first commit. Now try executing git status again. It should tell you that your working tree is clean. You can see your history of commits by executing the following command.

git log

At any point, I really encourage you to type -h in front of any command and examine what other options are supported.

That was fun, but there's still so much more to learn: branches, server-based stuff, forking, merging. So stay the course, young Padawan.

Push Code to a Server
You're a serious developer. You aren't just reading this to learn Git: You want to do some serious work. That means that you need a server-based Git repo that can scale to the Internet. An easy way to get such a repo, for free, is github.com. Feel free to use any other product you wish. For my purposes, I created a repo at https://github.com/maliksahil/gitlearn. (This is a private repo, so you won't be able to access it. All of these repos are protected by permissions, so you won't be able to commit to mine, even if it were a public repo. You should go and create your own.)

Now back to my local Git repo. I wish to push my local code into the server-side repo. How does my local repo know where to push to? The answer is that I need to add a remote origin, and here's how you do it:

git remote add origin [email protected]:maliksahil/gitlearn.git

Now wait a second. Let's unpack this a bit. What's that funny looking syntax? How did Git authenticate me?

First of all, git remote lets me manage tracked remote repositories. By saying git remote add, I'm saying that I wish to add a remote tracked repository. The origin keyword indicates that this is where the project was originally cloned from. You didn't clone your project from the remote repository, but you're basically saying that, with things set up this way, developers can clone from here in the future. The final parameter is the URL.

The way authentication works here is that, by default, GitHub uses username/password. But that's neither secure nor manageable. So it also supports SSH, which is what I have set up on my computer. Finally, you can use credential helpers to use alternate mechanisms of authentication as well.

[Advertisement: dtSearch. Instantly Search Terabytes. dtSearch's document filters support popular file types, emails with multilevel attachments, a wide variety of databases, and web data. Over 25 search options, including efficient multithreaded search, easy multicolor hit-highlighting, and forensics options like credit card search. Developers: SDKs for Windows, Linux, macOS; cross-platform APIs cover C++, Java and recent .NET (through .NET 6); FAQs on faceted search, granular data classification, Azure, AWS and more. Visit dtSearch.com for hundreds of reviews and case studies, plus fully-functional enterprise and developer evaluations. The Smart Choice for Text Retrieval® since 1991. dtSearch.com 1-800-IT-FINDS]



Go ahead and execute the git remote add origin command, as shown in the last snippet.

Next, you need to set the upstream branch. I haven't yet had a chance to talk about branches, but since you have only the branch "main", that will be your upstream branch. The idea is that an upstream branch is what's tracked on the remote repository by your local branch. My local "main" needs to mirror the server-side "main", so my "main" branch is a great choice for an upstream branch. To set the upstream, and to push my "main" into "origin", I use the following command:

git push -u origin main

Now visit the GitHub repo in your browser and your code should be visible, as shown in Figure 7.

SPONSORED SIDEBAR: Ready to Modernize a Legacy App?
Need FREE advice on migrating yesterday's legacy applications to today's modern platforms? Get answers by taking advantage of CODE Consulting's years of experience. Contact us today to schedule your free hour of CODE consulting. No strings. No commitment. Nothing to buy. For more information, visit www.codemag.com/consulting or email us at [email protected].

Modify Code
Okay, at this point, you should have a server-side repo at https://github.com/maliksahil/gitlearn and a locally cloned repo. Assuming that you don't have a locally cloned repo, you can use "git clone" to clone the repo from the server location. You know you can pass the --help parameter to any command, right? Try doing a git clone yourself.

Now I wish to make changes. A typical software project contains a number of files. You also go through many releases. Although nothing stops you from making changes to "main", it's generally considered a bad idea. Most real-world repositories set a policy on the Git repo to prevent making changes to the "main" branch. The idea is that you create an issue. Then you discuss what you wish to do on that issue. You associate the issue with the files you're changing, and you create a separate branch for your changes.

Aha! You're into branches now. What is a branch? For now, just think of it as a copy you've made of your code. I'll get into this in a minute.

You create a branch and you make your changes there. And then you "merge" your changes into "main" via a pull request.

Gosh, that's a mouthful. Before I get any more confused, let's see this in action.

First, in my local repo, let's create a branch.

git branch newchange

This command has now effectively given you a copy of "main". Don't worry, it isn't literally a full copy. Git is smart enough to abstract the details and store only the changes. For you, it feels like a copy. Before you can start working on this copy, you need to check out this branch, as follows:

git checkout newchange

I could have also abbreviated the above two commands into one, effectively saying "create a branch and check it out," like this:

git checkout -b newchange

Alternatively, I could create a branch in the server-side repo and pull the changes using git pull, etc. That would be very useful if your coworkers have created a branch that they've pushed to the server and you wish to work on it collaboratively.

At any point, you can run "git branch" to verify which branch you're on and which branches are available locally.

Now that you've checked out the newchange branch, let's make some changes. Modify the first file by appending some text.

echo "more changes" >> readme.md

And create a new file.

echo "a new file" > secondfile.md

Now, you should have two changes ready to go, as can be seen in Figure 8.

Figure 8: Two changes are ready to go.
Figure 9: I have no upstream branch.
Figure 10: I have branches.
Figure 11: Merging branches

Now let's stage these changes, commit them locally, and then push them into the cloud.

First, stage:

git add .

Then commit:

git commit -m "My second commit"



Look at you, issuing Git commands like a pro. I'm so proud. Now let's push it to the cloud.

git push

The last command didn't work. You should see an error, as shown in Figure 9.

This makes sense if you think about it. I never told my Git repo which upstream location "newchange" should be sent to. And it gives me a helpful command to fix it. So go ahead and run that command, which then sets the upstream location and pushes my changes.

git push -u origin newchange

Oh yes: -u is a shorthand for --set-upstream.

This is where the fun starts. Observe Figure 10.

See, in Figure 10, I effectively now have multiple versions of my code base. Isn't this great? I can now revert back to a production version in "main" while switching to a dev version in "newchange".

This brings up a question. How do I get my changes from "newchange" into "main"? There are two ways.

First, you can do a pull request. This means that you go to the Git repo and issue a PR (short for pull request). This is you asking: Hey, I would like to merge these changes into main. Usually, you'd also have some reviewer on the PR. The idea is that you don't have permissions to merge into main, or, as a policy, you wish to have an extra set of eyes look at your code. It's possible to set these policies on your repo, and most real-world projects have such policies.

The other mechanism is that you can merge from newchange into main and then push main to an upstream location. This is where, typically, you have both branches under your control. For instance, perhaps you have created a branch of a branch, but both branches are your dev work. To do this, you check out the target branch and merge into it:

git checkout main
git merge newchange

This workflow can be seen in Figure 11.

You can push this and your remote repo will reflect these changes. But I have other plans. Let's use the PR method. Visit your GitHub repo, and you should now see a nice helpful message, as shown in Figure 12.

Figure 12: Creating a PR

Use that "Compare & pull request" button to create a PR. This gives you a nice overview of the changes, the comments, files committed, approvers, labels, etc., which is great for a development workflow. You can also set up bots to do some basic review for you, and all sorts of other automation involving humans. When you're done, you can merge the pull request, and delete the branch, as shown in Figure 13.

Figure 13: Merge a PR

Now you've merged the PR, deleted the branch, and your changes are in main. You can feel free to also delete your local "newchange" branch as follows:

git branch -d newchange

Move, Rename, or Delete Files
I'll keep this section short because Git automates this nicely. And to save time and ink, I'll do everything in the "main" branch. Go ahead and perform the following changes to your repo.

mkdir afolder
mv secondfile.md afolder
mv readme.md dontreadme.md

You created a new folder and moved the secondfile.md into that folder, effectively deleting it from the root folder and adding a new file in afolder. Then you renamed readme.md to dontreadme.md.

Now, running a git status basically tells me everything that I just did. You can see this in Figure 14.

And now I can stage, commit, and push my changes as follows.

git add .
git commit -m "More changes"
git push
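As an aside, Git can collapse the move and the staging into one step with git mv; a minimal sketch using the same file names as above (standard Git behavior):

git mv secondfile.md afolder/secondfile.md   # move and stage in one step
git mv readme.md dontreadme.md               # rename and stage in one step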



You just made some amazing changes and pushed them to the remote repo. Best of all, you did so using the concepts you have learned so far. I encourage you to repeat these changes using a branch and do a pull request to solidify your knowledge.

Ignore Files
In any development project, you'll have files that you don't wish to check in as a part of your source code. These may be node_modules in a Node project, or the bin and obj folders in a .NET project. Your dev tools or development environment need these files, but they're downloaded or generated on the fly. Sometimes they're even specific to the operating system you're working on. Or perhaps you have configuration files with secrets or keys specific to the developer's environment. There are many situations where you want certain files to not be checked in. Let's understand how you can teach Git to ignore files.

If you've been following this article, you should have a Git repo that looks like Figure 15.

Figure 15: My Git repo so far

What I wish to do now is instruct Git to ignore the "afolder" contents going forward. Also, I'm going to create another file in the root of my repo. Let's call it env.txt, and I don't want to check it in. To save time, I'll do stuff in the main branch, although in real-world scenarios, you want to branch and merge.

First, let's create the env.txt file as follows:

echo 'someconfig' > env.txt

To instruct Git to ignore the env.txt file and the afolder folder, I'll create a new file in the root of my repo called ".gitignore". You can also choose to create a .gitignore file per folder and have those settings apply only to that folder and its children.

In my .gitignore file, I choose to put the following text:

env.txt
afolder/

At this point, I'm going to add, commit, and push. Now let's visit my repository on github.com and examine what it looks like. This can be seen in Figure 16.

Figure 16: My Git repo with .gitignore

Are you surprised by what you see in Figure 16? I do see that env.txt was ignored. But why is afolder still there?

Figure 14: The Git status for the bunch of stuff I just did.
Figure 17: The working tree remains clean even after new files are added in ignored folders.
Figure 18: Making changes in an already tracked file in an ignored folder



It's still there because you're ignoring it going forward. This means that now if you were to put another file under afolder and try to check it in, that new file won't be checked in.

You can see this in action in Figure 17. Notice that the working tree remains clean, no matter what I do in afolder. Or is it?

Let's append some text in the afolder/secondfile.md file; remember that the secondfile.md file is checked into Git already. This can be seen in Figure 18.

Interestingly, now my working folder is no longer clean, even though I made the change in a file that resides in a folder that I've instructed to be ignored. This is because the file "secondfile.md" was already being tracked.

Why is this useful? It's useful for configuration settings, such as web.config or .env files. It's quite normal for developers to check in an .env.sample file instructing other developers who clone the repository to follow the structure of .env.sample when they create their own .env files.

The .env file is instructed to be ignored from the get-go, but the .env.sample file is not. This means that I can continue to maintain .env.sample and keep my instructions updated, while the .env remains safely out of source control.

But this behavior can also be problematic sometimes. Let's say that you forgot to include an auto-generated folder such as node_modules in the first check-in. How do you now instruct Git to not track this folder going forward, even though you checked it in once?

First, let's reset my repo to what's checked into remote.

git reset --hard && git pull

This command discards all of my changes and refreshes my local working copy from the remote location, just to make sure I have my teammates' changes on my disk.

Now I wish to instruct Git to stop tracking "afolder" but leave my working copy of afolder alone. This entire sequence can be seen in Figure 19.

Figure 19: Remove files you wish not to track
Figure 20: Git repo with my cleaned-up index

There's a lot going on in Figure 19, so let's break it down step by step.

First, using the tree command, I show the current structure of my working tree. Note that my gitignore has instructed Git to ignore afolder and env.txt, but afolder/secondfile.md is already being tracked. The afolder/anewfile.md was created after the gitignore file was created, so it's not being tracked.

Next, I ask Git to remove files from the index recursively using the "-r" option; but by also passing the --cached option, I instruct Git to remove files only from the index and leave the working tree alone. Long story short, this means: Leave my local files alone but fix the Git repo.

Running this command informs me of the changes Git made. It didn't remove the files from my disk, though. To fully understand the changes, I then run a git status command, which tells me that it deleted .gitignore, but not really: It now shows .gitignore as untracked. This means that when I do a "git add .", those untracked files will be tracked again. But you know what won't be tracked? The afolder/secondfile.md won't be tracked going forward.

This achieves my goal of telling Git: Hey, really, stop tracking this entire folder, just like my .gitignore instructs you to do.
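Figure 19 is only described here, not reproduced, so as a minimal sketch, the command sequence it depicts looks like this (an assumption consistent with the description above):

tree                    # show the working tree; afolder/ is still tracked
git rm -r --cached .    # remove everything from the index, recursively,
                        # but leave the working tree alone
git status              # .gitignore and friends now show as untracked
git add .               # re-add everything; ignored paths stay untracked
git commit -m "Stop tracking ignored files"
git push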



To put it simply, adding a .gitignore won't cause Git to stop tracking files that are already being tracked but match the .gitignore spec. This is by design. To actually untrack files, you also need to remove them from the index.

Now, go ahead and do an add, commit, and push. Your Git repo should look like Figure 20.

You can imagine that "afolder" could be something like node_modules, or something that you actually wanted to get rid of.

Diff
When you're working on a software project, you're editing files. This is your source code, and you need plenty of things to help you keep control of what's being committed. You've already seen a Git command called git status that lets you do this at the file level. But what about changes inside a file? Perhaps you want a good way to compare two versions of a file and get a clear idea of what changes will be made if you push your changes.

Let's understand this with an example. The current state of my Git repo is shown in Figure 21.

As can be seen in Figure 21, I have one file in the root called "dontreadme.md", and a few other files and folders (mostly ignored by gitignore). What I wish to do is add some text to dontreadme.md and create a new file called readme.md.

echo 'even more stuff' >> dontreadme.md
echo 'brand new file' > readme.md

You can now run git status to see what has changed. Here's a trick. The output of git status can be quite wordy. If you want to see a quick shorthand output, which may be useful when you have a lot of files, use the following command:

git status -s

The output of this command looks like this:

 M dontreadme.md
?? readme.md

This output tells you that the dontreadme.md file has been modified, and that the readme.md file is untracked.

Now, go ahead and stage dontreadme.md.

git add dontreadme.md

Run git status -s again. The output in text remains the same, but notice closely that the "M" by dontreadme.md has changed from red to green.

Now append some more text to dontreadme.md. Don't stage this newly appended content and run git status -s again. This time you'll see that the dontreadme.md status now says "MM": one M is green, and the other is red. This can be seen in Figure 22.

So now you have some content in remote, some content staged, and some in working copy, and all this content is slightly different from each other.

How do you get ahead of the differences between these three? The magic command is:

git diff

The output of this command can be seen in Listing 1. Let's be honest: This output is quite cryptic. Let's try to understand what this output means. This command compares your working copy to staged changes.

Listing 1: Output of git diff

diff --git a/dontreadme.md b/dontreadme.md
index b84a4f4..d41a9fb 100644
--- a/dontreadme.md
+++ b/dontreadme.md
@@ -1,3 +1,4 @@
 first file
 more changes
 even more stuff
+so much stuff

Figure 21: My Git repo's starting point
Figure 22: Git status -s.

• The a/ and b/ are directories: not real directories, but a way to show you that a/ is the index, and b/ is the working directory.
• The IDs you see after that (b84a4f4) are BLOB IDs of the files mentioned.
• The 100644 you see is "mode bits", telling you that this isn't an executable file or a symlink; it's just a text file.
• The --- a/ and +++ b/ lines that follow are interesting. The minus sign marks lines that are in the a/ version but missing from the b/ version, and the plus sign marks lines added in b/.
• The next line, starting with @@, is also interesting. The changes are summarized as chunks, and here you have one chunk. This @@ line is the header of this chunk. It's telling you that starting at line 1, three lines come from the a/ version, and starting at line 1, the b/ version has four lines.

It tries to color-code it, but the color coding can frequently get messed up over SSH sessions or your local settings.

If you want to compare what's staged with your last commit, you simply use this command:

git diff --staged
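For reference, this is how the three comparisons line up (standard Git behavior):

git diff             # working copy vs. the staging area (index)
git diff --staged    # staging area vs. the last commit (HEAD)
git diff HEAD        # working copy vs. the last commit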


Phew! This works, but it's tedious. Is there a better way?

Use VSCode as a Diffing Tool
In the real world, you use Git diffing tools. You can use any tool that supports diffing: cross-platform tools such as KDiff3 or P4Merge, or, for Windows, you can use WinMerge. Personally, I prefer to use VSCode; it's a pretty nice diffing tool.

Most modern tools support this out of the box. You simply open a Git repo in VSCode and VSCode starts leveraging the output of Git behind the scenes to give you a visual Git experience. You can completely integrate diffing right inside of VSCode. Here is how.

First, instruct VSCode to act as the diffing tool for Git. To do so, edit your gitconfig file:

git config --global -e

Once your .gitconfig opens in your configured editor, add the following lines at the bottom of it:

[diff]
    tool = vscode
[difftool "vscode"]
    cmd = "code --wait --diff $LOCAL $REMOTE"

Now, instead of saying "git diff", run "git difftool". It should show you an output like this:

Viewing (1/1): 'dontreadme.md'
Launch 'vscode' [Y/n]?

If you hit "Y" on that prompt, it opens VSCode, which then takes care of showing you the diff. This can be seen in Figure 23.

Figure 23: Here, I'm diffing in VSCode like a champ.

You can also try "git difftool --staged" to view the staged diff.

This is a much more visual diff. Using VSCode is so much easier to understand than viewing the ASCII wall that git diff threw at me.

Summary
Git is an incredibly important skill. And let's be honest: There's a learning curve here. When I started writing this article, I thought I'd cover a bunch of interesting stuff that some developers consider advanced, such as merging, forking, concurrent developers working, and resolving merge conflicts. Those are skills that you'll need and use daily in a typical developer's workday. But as I started writing this article, I realized how much knowledge and how many nuances I take for granted, and before I knew it, this article started getting longer than I had anticipated.

Even to cover the basics of Git and tie it to practical real-world situations, there's an unsaid skill, an assumed knowledge, that can be frustrating to discover.

I'm really curious to know: Do you consider yourself to be a seasoned developer? Do you use Git regularly? Even in these ultra-basic commands around Git usage, did you discover anything new? What complex Git situations would you like to see broken down in future articles? Do let me know.

git commit -m "That's a wrap" && git push

Sahil Malik



ONLINE QUICK ID 2201031

Enhance Your MVC Applications Using JavaScript and jQuery: Part 3
This article continues my series on how to enhance the user experience (UX) of your MVC applications, and how to make them faster. In the first article, entitled Enhance Your MVC Applications Using JavaScript and jQuery: Part 1 (https://www.codemag.com/Article/2109031/Enhance-Your-MVC-Applications-Using-JavaScript-and-jQuery-Part-1), and the second article, entitled Enhance Your MVC Applications Using JavaScript and jQuery: Part 2 (https://www.codemag.com/Article/2111031/Enhance-Your-MVC-Applications-Using-JavaScript-and-jQuery-Part-2), you learned about the starting MVC application, which was coded using all server-side C#. You then added JavaScript and jQuery to avoid post-backs and enhance the UX in various ways. If you haven't already read these articles, I highly recommend that you read them to learn about the application you're enhancing in this series of articles.

Paul D. Sheriff
http://www.pdsa.com

Paul has been in the IT industry over 33 years. In that time, he has successfully assisted hundreds of companies to architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has 23 courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap. Contact Paul at [email protected].

In this article, you're going to build Web API calls that you can call from the application to avoid post-backs. You're going to add calls to add, update, and delete shopping cart information. In addition, you're going to learn to work with dependent drop-down lists to also avoid post-backs. Finally, you learn to use jQuery auto-complete instead of a drop-down list to provide more flexibility to your user.

The Problem: Adding to Shopping Cart Requires a Post-Back
On the Shopping page, each time you click on an Add to Cart button (Figure 1), a post-back occurs and the entire page is refreshed. This takes time and causes a flash on the page that can be annoying to the users of your site. In addition, it takes time to perform this post-back because all the data must be retrieved from the database server, the entire page needs to be rebuilt on the server side, and then the browser must redraw the entire page. All of this leads to a poor user experience.

Figure 1: Adding an item to the shopping cart can be more efficiently handled using Ajax.

The Solution: Create a Web API Call
The first thing to do is to create a new Web API controller to handle the calls for the shopping cart functionality. Right mouse-click on the PaulsAutoParts project and create a new folder named ControllersApi. Right mouse-click on the ControllersApi folder and add a new class named ShoppingApiController.cs. Remove the default code in the file and add the code shown in Listing 1 to this new class file.

Add two attributes before this class definition to tell .NET that this is a Web API controller and not an MVC page controller. The [ApiController] attribute enables some features such as attribute routing, automatic model validation, and a few other API-specific behaviors. When using the [ApiController] attribute, you must also add the [Route] attribute. The route attribute adds the prefix "api" to the default "[controller]/[action]" route used by your MVC page controllers. You can choose whatever prefix you wish, but the "api" prefix is a standard convention that most developers use.

In the constructor for this API controller, inject the AppSession, and the product and vehicle type repositories. Assign the product and vehicle type repositories to the corresponding private read-only fields defined in this class.

The AddToCart() method is what's called from jQuery Ajax to insert a product into the shopping cart that's stored in the Session object. This code is similar to the code written in the MVC controller class ShoppingController.Add() method. After adding the id passed in by Ajax, a status code of 200 is passed back from this Web API call to indicate that the product was successfully added to the shopping cart. At this point, you have everything you need on the back-end to add a product to the shopping cart via an Ajax call.
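One routing detail worth spelling out before you wire up the client: ASP.NET Core drops the "Controller" suffix from the class name when it fills in the [controller] token. As a sketch of how the calls in this article resolve, based on the [Route] template shown in Listing 1:

// [Route("api/[controller]/[action]")] on ShoppingApiController:
//   [controller] = "ShoppingApi"  (the "Controller" suffix is dropped)
//   [action]     = the method name
// So AddToCart() is reachable at:
//   POST /api/ShoppingApi/AddToCart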

Listing 1: Create a new Web API controller with methods to eliminate post-backs

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using PaulsAutoParts.AppClasses;
using PaulsAutoParts.Common;
using PaulsAutoParts.EntityLayer;
using PaulsAutoParts.ViewModelLayer;

namespace PaulsAutoParts.ControllersApi
{
  [ApiController]
  [Route("api/[controller]/[action]")]
  public class ShoppingApiController : AppController
  {
    #region Constructor
    public ShoppingApiController(AppSession session,
      IRepository<Product, ProductSearch> repo,
      IRepository<VehicleType,
        VehicleTypeSearch> vrepo) : base(session)
    {
      _repo = repo;
      _vehicleRepo = vrepo;
    }
    #endregion

    #region Private Fields
    private readonly IRepository<Product, ProductSearch> _repo;
    private readonly IRepository<VehicleType,
      VehicleTypeSearch> _vehicleRepo;
    #endregion

    #region AddToCart Method
    [HttpPost(Name = "AddToCart")]
    public IActionResult AddToCart([FromBody]int id)
    {
      // Set Cart from Session
      ShoppingViewModel vm = new(_repo, _vehicleRepo,
        UserSession.Cart);

      // Set "Common" View Model Properties from Session
      base.SetViewModelFromSession(vm, UserSession);

      // Add item to cart
      vm.AddToCart(id, UserSession.CustomerId.Value);

      // Set cart into session
      UserSession.Cart = vm.Cart;

      return StatusCode(StatusCodes.Status200OK, true);
    }
    #endregion
  }
}

<a class="btn btn-info" Listing 2: Add three methods in the pageController closure to modify the shopping cart
id="updateCart" function modifyCart(id, ctl) {
data-isadding="true" // Are we adding or removing?
onclick="pageController if (Boolean($(ctl).data("isadding"))) {
.modifyCart(@item.ProductId, this)"> // Add product to cart
addToCart(id);
Add to Cart // Change the button
</a> $(ctl).text("Remove from Cart");
$(ctl).data("isadding", false);
When you post back to the server, a variable in the view $(ctl).removeClass("btn-info")
.addClass("btn-danger");
model class is set on each product to either display the Add }
to Cart link or the Remove from Cart link. When using client- else {
side code, you’re going to toggle the same link to either // Remove product from cart
removeFromCart(id);
perform the add or the remove. Use the data-isadding at- // Change the button
tribute on the anchor tag to determine whether you’re do- $(ctl).text("Add to Cart");
ing an add or a remove. $(ctl).data("isadding", true);
$(ctl).removeClass("btn-danger")
.addClass("btn-info");
Add Code to Page Closure }
The onclick event in the anchor tag calls a method on the }
pageController called modifyCart(). You pass to this cart
the current product ID and a reference to the anchor tag function addToCart(id) {
}
itself. Add this modifyCart() method by opening the Views\
Shopping\Index.cshtml file and adding the three private function removeFromCart(id) {
methods (Listing 2) to the pageController closure: modi- }
fyCart(), addToCart(), and removeFromCart(). The modify-
Cart() method is the one that’s made public; the other two
are called by the modifyCart() method. Create the addToCart() Method
Write the addToCart() method in the pageController closure
The modifyCart() method checks the value in the data-isad- to call the new AddToCart() method you added in the Shop-
ding attribute to see if it’s true or false. If it’s true, call the pingApiController class. Because you’re performing a post,
addToCart() method, change the link text to “Remove from you may use either the jQuery $.ajax() or $.post() methods.
Cart”, set the data-isadding=”false”, remove the class I chose to use the $.post() method in the code shown in fol-
“btn-info”, and add the class “btn-danger”. If false, call the lowing snippet.
removeFromCart() method and change the attributes on the
link to the opposite of what you just set. Modify the return function addToCart(id) {
object to expose the modifyCart() method. let settings = {
url: "/api/ShoppingApi/AddToCart",
return { contentType: "application/json",
"setSearchArea": setSearchArea, data: JSON.stringify(id)
"modifyCart": modifyCart }
} $.post(settings)

.done(function (data) { Remove from Cart immediately. Click on the “0 Items in
console.log( Cart” link in the menu bar and you should see an item in the
"Product Added to Shopping Cart"); cart. Don’t worry about the “0 Items in Cart” link; you’ll fix
}) that a little later in this article.
.fail(function (error) {
console.error(error);
});
The Problem: Delete from Shopping
} Cart Requires a Post-Back
Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart. You should notice that the link changes to Remove from Cart immediately. Click on the "0 Items in Cart" link in the menu bar and you should see an item in the cart. Don't worry about the "0 Items in Cart" link; you'll fix that a little later in this article.

The Problem: Delete from Shopping Cart Requires a Post-Back
Now that you've added a product to the shopping cart using Ajax, it would be good to also remove an item from the cart using Ajax. The link on the product you just added to the cart should now be displaying Remove from Cart (Figure 2). This was set via the JavaScript you wrote in the addToCart() method. The data-isadding attribute has been set to a false value, so when you click on the link again, the code in the modifyCart() method calls the removeFromCart() method.

Figure 2: Removing items from a cart can be more efficiently handled using Ajax.

The Solution: Write a Web API Method to Delete from the Shopping Cart
Open the ControllersApi\ShoppingApiController.cs file and add a new method named RemoveFromCart(), as shown in Listing 3. This method is similar to the Remove() method contained in the ShoppingController MVC class. A product ID is passed into this method and the RemoveFromCart() method is called on the view model to remove this product from the shopping cart held in the Session object. A status code of 200 is returned from this method to indicate that the product was successfully removed from the shopping cart.

Listing 3: The RemoveFromCart() method deletes a product from the shopping cart

[HttpDelete("{id}", Name = "RemoveFromCart")]
public IActionResult RemoveFromCart(int id)
{
    // Set Cart from Session
    ShoppingViewModel vm = new(_repo,
        _vehicleRepo, UserSession.Cart);

    // Set "Common" View Model Properties
    // from Session
    base.SetViewModelFromSession(vm,
        UserSession);

    // Remove item from cart
    vm.RemoveFromCart(vm.Cart, id,
        UserSession.CustomerId.Value);
    // Set cart into session
    UserSession.Cart = vm.Cart;

    return StatusCode(StatusCodes.Status200OK,
        true);
}

Modify the Remove from Cart Link
You no longer want a post-back to occur when you click on the Remove from Cart link, so you need to remove the asp- attributes and add code to make an Ajax call. Open the Views\Shopping\_ShoppingList.cshtml file, locate the Remove from Cart <a> tag, and remove the asp-action="Remove" and asp-route-id="@item.ProductId" attributes. Add an id and data- attributes, and an onclick event, as shown in the code below.

<a class="btn btn-danger"
   id="updateCart"
   data-isadding="false"
   onclick="pageController
       .modifyCart(@item.ProductId, this)">
   Remove from Cart
</a>

Notice that you're setting the id attribute to the same value as on the Add to Cart button. As you know, you can't have two HTML elements with the id attribute set to the same value. Because these two buttons are wrapped within an @if() statement, however, only one of them is written by the server into the DOM at a time.

Add Code to pageController
Open the Views\Shopping\Index.cshtml file and add code to the removeFromCart() method. Call the $.ajax() method, setting the url property to the location of the RemoveFromCart() method you added, and set the type property to "DELETE". Pass the id of the product to delete on the URL line, as shown in the following code snippet.

function removeFromCart(id) {
    $.ajax({
        url: "/api/ShoppingApi/RemoveFromCart/" + id,
        type: "DELETE"
    })
    .done(function (data) {
        console.log(
            "Product Removed from Shopping Cart");
    })
    .fail(function (error) {
        console.error(error);
    });
}
Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart. You should notice that the link changes to Remove from Cart immediately. Click on the "0 Items in Cart" link in the menu bar and you should see an item in the cart. Click on the back button on your browser and click the Remove from Cart link on the item you just added. Click on the "0 Items in Cart" link and you should see that there are no longer any items in the shopping cart.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category drop-down. Then select "Enhance your MVC Applications using JavaScript and jQuery: Part 3" from the Item drop-down.

The Problem: The "n Items in Cart" Link Isn't Updated
After modifying the code in the previous section to add and remove items from the shopping cart using Ajax, you noticed that the "0 Items in Cart" link in the menu bar isn't updating with the current number of items in the cart. That's because this link is generated by data from the server-side. Because you're bypassing server-side processing with Ajax calls, you need to update this link yourself.

The Solution: Add Client-Side Code to Update the Link
Open the Views\Shared\_Layout.cshtml file and locate the "Items in Cart" link. Add an id attribute to the <a> tag and assign it the value of "itemsInCart", as shown in the following code snippet.

<a id="itemsInCart"
   class="text-light"
   asp-action="Index"
   asp-controller="Cart">
   @ViewData["ItemsInCart"] Items in Cart
</a>

Create a new method to increment or decrement the "Items in Cart" link. Open the wwwroot\js\site.js file and add a new method named modifyItemsInCartText() to the mainController closure, as shown in Listing 4. An argument is passed to this method to specify whether you're adding or removing an item from the shopping cart. This tells the method to either increment or decrement the number of items in the text displayed on the menu.

Listing 4: Add a modifyItemsInCartText() method to the mainController closure

function modifyItemsInCartText(isAdding) {
    // Get text from <a> tag
    let value = $("#itemsInCart").text();
    let count = 0;
    let pos = 0;

    // Find the space in the text
    pos = value.indexOf(" ");
    // Get the total # of items
    count = parseInt(value.substring(0, pos));

    // Increment or decrement the total # of items
    if (isAdding) {
        count++;
    }
    else {
        count--;
    }

    // Create the text with the new count
    value = count.toString() + " " + value.substring(pos);
    // Put text back into the cart
    $("#itemsInCart").text(value);
}

The modifyItemsInCartText() method extracts the text portion from the <a> tag holding the "0 Items in Cart". It calculates the position of the first space in the text, which allows you to parse the numeric portion, turn that into an integer, and place it into the variable named count. If the value passed into the isAdding parameter is true, count is incremented by one. If the value passed is false, count is decremented by one. The new numeric value is then placed where the old numeric value was in the string and this new string is inserted back into the <a> tag.
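As a side note, if you'd rather not parse the count back out of the display text, you could track the number separately, for example in a data- attribute on the same <a> tag. This is only a sketch of that alternative (it assumes the server renders an initial data-count value on the link), not the article's downloadable code:

function modifyItemsInCartText(isAdding) {
    let elem = $("#itemsInCart");

    // Read the running count from the data- attribute
    let count = parseInt(elem.data("count"), 10) || 0;
    count += isAdding ? 1 : -1;

    // Store the new count and rebuild the display text from it
    elem.data("count", count);
    elem.text(count + " Items in Cart");
}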
Expose the modifyItemsInCartText() method from the return object on the mainController closure, as shown in the following code.

return {
    "pleaseWait": pleaseWait,
    "disableAllClicks": disableAllClicks,
    "setSearchValues": setSearchValues,
    "isSearchFilledIn": isSearchFilledIn,
    "setSearchArea": setSearchArea,
    "modifyItemsInCartText": modifyItemsInCartText
}

Call this function after making the Ajax call to either add or remove an item from the cart. Open the Views\Shopping\Index.cshtml file and locate the done() method in the addToCart() method. Add the line shown just before the console.log() statement.

$.post(settings)
    .done(function (data) {
        mainController.modifyItemsInCartText(true);
        console.log("Product Added to Shopping Cart");
    })
// REST OF THE CODE HERE

Locate the done() method in the removeFromCart() method and add the line of code just before the console.log() statement, as shown in the following code snippet.

$.ajax({
    url: "/api/ShoppingApi/RemoveFromCart/" + id,
    type: "DELETE"
})
.done(function (data) {
    mainController.modifyItemsInCartText(false);
    console.log("Product Removed from Shopping Cart");
})
// REST OF THE CODE HERE

Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart and notice the link changes to Remove from Cart immediately. You should also see the "Items in Cart" link increment. Click on the Remove from Cart link and you should see the "Items in Cart" link decrement.
The Problem: Dependent Drop-Downs Require Multiple Post-Backs
A common user interface problem to solve is that when you choose an item from a drop-down, you need the drop-down immediately following it to be filled with information specific to the selected item. For example, run the application and select the Shop menu to get to the Shopping page. In the left-hand search area, select a Vehicle Year from the drop-down list (Figure 3). Notice that a post-back occurs and a list of Vehicle Makes is filled into the corresponding drop-down. Once you choose a make, another post-back occurs and a list of vehicle models is filled into the last drop-down. Notice the flashing of the page, caused by the post-back, each time you change the year or make.

Figure 3: Multiple post-backs occur when you select a different value from any of these drop-downs.

The Solution: Connect All Drop-Downs to Web API Services
To eliminate this flashing, create Web API calls to return makes and models. After selecting a year from the Vehicle Year drop-down, an Ajax call is made to retrieve all makes for that year in a JSON format. Use jQuery to build a new set of <option> objects for the Vehicle Make drop-down. The same process can be done for the Vehicle Model drop-down as well.

Open the ControllersApi\ShoppingApiController.cs file and add a new method named GetMakes() to get all makes of vehicles for a specific year, as shown in Listing 5. This method accepts the year of the vehicle to search for. The GetMakes() method on the ShoppingViewModel class is called to set the Makes property with the collection of vehicle makes that are valid for that year. The set of vehicle makes is returned from this Web API method.

Listing 5: The GetMakes() method returns all vehicle makes for a specific year

[HttpGet("{year}", Name = "GetMakes")]
public IActionResult GetMakes(int year)
{
    IActionResult ret;

    // Create view model
    ShoppingViewModel vm = new(_repo,
        _vehicleRepo, UserSession.Cart);

    // Get vehicle makes for the year
    vm.GetMakes(year);

    // Return all Makes
    ret = StatusCode(StatusCodes.Status200OK,
        vm.Makes);

    return ret;
}

Next, add another new method named GetModels() to the ShoppingApiController class to retrieve all models for a specific year and make, as shown in Listing 6. In this method, both a year and a vehicle make are passed in. The GetModels() method on the ShoppingViewModel class is called to populate the Models property with all vehicle models for that specific year and make. The collection of vehicle models is returned from this Web API method.

Listing 6: The GetModels() method returns all vehicle models for a specific year and make

[HttpGet("{year}/{make}", Name = "GetModels")]
public IActionResult GetModels(int year,
    string make)
{
    IActionResult ret;

    // Create view model
    ShoppingViewModel vm = new(_repo,
        _vehicleRepo, UserSession.Cart);

    // Get vehicle models for the year/make
    vm.GetModels(year, make);

    // Return all Models
    ret = StatusCode(StatusCodes.Status200OK,
        vm.Models);

    return ret;
}
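The view model's GetMakes() and GetModels() implementations aren't shown in this article; they populate the Makes and Models properties from the vehicle data. Purely as a sketch of what the underlying repository queries presumably resemble, mirroring the shape of the SearchMakes() and SearchModels() methods shown later in this article (this is not copied from the sample code):

public List<string> GetMakes(int year)
{
    // Distinct makes for the given year, sorted alphabetically
    return _DbContext.VehicleTypes
        .Where(v => v.Year == year)
        .Select(v => v.Make).Distinct()
        .OrderBy(v => v).ToList();
}

public List<string> GetModels(int year, string make)
{
    // Distinct models for the given year/make combination
    return _DbContext.VehicleTypes
        .Where(v => v.Year == year && v.Make == make)
        .Select(v => v.Model).Distinct()
        .OrderBy(v => v).ToList();
}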
Modify the Shopping Cart Page
It's now time to add a couple of methods to your shopping cart page to call these new Web API methods you added to the ShoppingApiController class. Open the Views\Shopping\Index.cshtml file and add a method to the pageController named getMakes(), as shown in Listing 7.

The getMakes() method retrieves the year selected by the user. It then clears the drop-down that holds all vehicle makes and the one that holds all vehicle models. Next, a call is made to the GetMakes() Web API method using the $.get() shorthand method. If the call is successful, use the jQuery each() method on the data returned to iterate over the collection of vehicle makes. For each make, build an <option> element with the vehicle make within the <option> and append it to the drop-down.

Listing 7: The getMakes() method retrieves vehicle makes and builds a drop-down

function getMakes(ctl) {
    // Get year selected
    let year = $(ctl).val();

    // Search for element just one time
    let elem = $("#SearchEntity_Make");

    // Clear makes drop-down
    elem.empty();
    // Clear models drop-down
    $("#SearchEntity_Model").empty();

    $.get("/api/ShoppingApi/GetMakes/" +
        year, function (data) {
        // Load the makes into drop-down
        $(data).each(function () {
            elem.append(`<option>${this}</option>`);
        });
    })
    .fail(function (error) {
        console.error(error);
    });
}

Add another method to the pageController named getModels(), as shown in Listing 8. The getModels() method retrieves both the year and the make selected by the user. Clear the models drop-down list in preparation for loading the new list. Call the GetModels() method using the $.get() shorthand method. If the call is successful, use the jQuery each() method on the data returned to iterate over the collection of vehicle models. For each model, build an <option> element with the vehicle model within the <option> and append it to the drop-down.

Listing 8: The getModels() method retrieves vehicle models and builds a drop-down

function getModels(ctl) {
    // Get currently selected year
    let year = $("#SearchEntity_Year").val();
    // Get make selected
    let make = $(ctl).val();

    // Search for element just one time
    let elem = $("#SearchEntity_Model");

    // Clear models drop-down
    elem.empty();

    $.get("/api/ShoppingApi/GetModels/" +
        year + "/" + make, function (data) {
        // Load the models into drop-down
        $(data).each(function () {
            elem.append(`<option>${this}</option>`);
        });
    })
    .fail(function (error) {
        console.error(error);
    });
}

Because you added two new private methods to the pageController closure, you need to expose these two methods by modifying the return object, as shown in the following code snippet.

return {
    "setSearchArea": setSearchArea,
    "modifyCart": modifyCart,
    "getMakes": getMakes,
    "getModels": getModels
}

Now that you have the new methods written and exposed from your pageController closure, hook them up to the appropriate onchange events of the drop-downs for the year and make within the search area on the page. Locate the <select> element for the SearchEntity.Year property and modify the onchange event to look like the following code snippet.

<select class="form-control"
    onchange="pageController.getMakes(this);"
    asp-for="SearchEntity.Year"
    asp-items="@(new SelectList(Model.Years))">
</select>

Next, locate the <select> element for the SearchEntity.Make property and modify the onchange event to look like the following code snippet.

<select class="form-control"
    onchange="pageController.getModels(this);"
    asp-for="SearchEntity.Make"
    asp-items="@(new SelectList(Model.Makes))">
</select>

Try It Out
Run the application and click on the Shop menu. Expand the "Search by Year/Make/Model" search area and select a year from the drop-down. The vehicle makes are now filled into the drop-down, but the page didn't flash because there's no longer a post-back. If you select a vehicle make from the drop-down, you should see the vehicle models filled in; again, the page didn't flash because there was no post-back.

The Problem: Allow a User to Either Select an Existing Category or Add a New One
Click on the Admin > Products menu, then click the Add button to allow you to enter a new product (Figure 4). Notice that the Category field is a text box. This is fine if you want to add a new Category, but what if you want the user to be able to select from the existing categories already assigned to products? You could switch this to a drop-down list, but then the user could only select an existing category and wouldn't be able to add a new one on the fly. What would be ideal is to use a text box, but also have a drop-down component that shows the existing categories as the user types a few letters into the text box.

Figure 4: jQuery has validation capabilities as well as an auto-complete that can make your UI more responsive and avoid post-backs.
The Solution: Use jQuery UI Auto-Complete
To solve this problem, you need to bring in the jQuery UI library and use the auto-complete functionality. Once it's added to your project, connect a jQuery auto-complete to the Category text box so that after the user starts to type, a list of existing categories is displayed directly under the text box.

Modify the Product Repository Class
First, you need to make some changes to the back-end to support searching for categories by finding where a category starts with the text the user types in. Open the RepositoryClasses\ProductRepository.cs file in the PaulsAutoParts.DataLayer project and add a new method named SearchCategories() to this class. This method takes the characters entered by the user and queries the database to retrieve only those categories that start with those characters, as shown in the following code snippet.

public List<string> SearchCategories(
    string searchValue)
{
    return _DbContext.Products
        .Select(p => p.Category).Distinct()
        .Where(p => p.StartsWith(searchValue))
        .OrderBy(p => p).ToList();
}

This LINQ query roughly translates to the following SQL query.

SELECT DISTINCT Category
FROM Product
WHERE Category LIKE 'C%'

Modify the Shopping View Model Class
Instead of calling the repository methods directly from controller classes, it's best to let your view model class call these methods. Open the ShoppingViewModel.cs file in the PaulsAutoParts.ViewModelLayer project. Add a new method to this class named SearchCategories() that makes the call to the repository class method you just created.

public List<string> SearchCategories(
    string searchValue)
{
    return ((ProductRepository)Repository)
        .SearchCategories(searchValue);
}

Add a New Web API Method to the Controller
It's now time to create the Web API method for you to call via Ajax to search for the categories based on each character the user types into the text box. Open the ControllersApi\ShoppingApiController.cs file and add a new method named SearchCategories() to this controller class, as shown in Listing 9. This method accepts the character(s) typed into the Category text box; if the value is blank, the method returns all categories, otherwise it passes the search value to the SearchCategories() method you just created.

Listing 9: The SearchCategories() method performs a search for categories based on user input

[HttpGet("{searchValue}",
    Name = "SearchCategories")]
public IActionResult SearchCategories(
    string searchValue)
{
    IActionResult ret;

    ShoppingViewModel vm = new(_repo,
        _vehicleRepo, UserSession.Cart);

    if (string.IsNullOrEmpty(searchValue)) {
        // Get all product categories
        vm.GetCategories();

        // Return all categories
        ret = StatusCode(StatusCodes.Status200OK,
            vm.Categories);
    }
    else {
        // Search for categories
        ret = StatusCode(StatusCodes.Status200OK,
            vm.SearchCategories(searchValue));
    }

    return ret;
}

Add jQuery UI Code to the Product Page
Open the Views\Product\ProductIndex.cshtml file and, at the top of the page just below the setting of the page title, add a new section to include the jquery-ui.css file. This is needed for styling the auto-complete drop-down.

@section HeadStyles {
    <link rel="stylesheet"
        href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
}

Now go to the bottom of the file and, in the @section Scripts area just before your opening <script> tag, add the following <script> tag to include the jQuery UI JavaScript file.

<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js">
</script>

Add a new method to the pageController closure named categoryAutoComplete(). The categoryAutoComplete() method is the publicly exposed method that's called from $(document).ready() to hook up the auto-complete to the category text box using the autocomplete() method. Pass in an object to the autocomplete() method to set the source property to a method named searchCategories(), which is called to retrieve the category data to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchCategories() method. I've set it to one, so the user must type in at least one character in order to have the searchCategories() method called.
function categoryAutoComplete() {
    // Hook up Category auto-complete
    $("#SelectedEntity_Category").autocomplete({
        source: searchCategories,
        minLength: 1
    });
}

Add the searchCategories() method that's called from the source property. This method must accept a request object and a response callback function. It uses the $.get() method to make the Web API call to the SearchCategories() method, passing in the request.term property, which is the text the user entered into the category text box. If the call is successful, the data retrieved from the Ajax call is sent back via the response callback function.

function searchCategories(request, response) {
    $.get("/api/ShoppingApi/SearchCategories/" +
        request.term, function (data) {
        response(data);
    })
    .fail(function (error) {
        console.error(error);
    });
}
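One optional refinement while you're here: the jQuery UI autocomplete widget also supports a delay option, the number of milliseconds to wait after the last keystroke before invoking the source function (the default is 300). Raising it reduces the number of Web API calls made while a fast typist is still typing, as in this sketch:

$("#SelectedEntity_Category").autocomplete({
    source: searchCategories,
    minLength: 1,
    // Wait half a second after typing stops before calling the server
    delay: 500
});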
Modify the return object to expose the categoryAutoComplete() method.

return {
    "setSearchValues": setSearchValues,
    "setSearchArea": mainController.setSearchArea,
    "isSearchFilledIn":
        mainController.isSearchFilledIn,
    "categoryAutoComplete": categoryAutoComplete
}

Finally, modify the $(document).ready() function to call the pageController.categoryAutoComplete() method to hook up jQuery UI to the Category text box.

$(document).ready(function () {
    // Setup the form submit
    mainController.formSubmit();

    // Hook up category auto-complete
    pageController.categoryAutoComplete();

    // Collapse search area or not?
    pageController.setSearchValues();
    // Initialize search area on this page
    pageController.setSearchArea();
});

Try It Out
Run the application and select the Admin > Products menu. Click on the Add button and click into the Category text box. Type the letter T in the Category input field and you should see a drop-down appear of categories that start with the letter T.

The Vehicle Type Page Needs Auto-Complete for the Vehicle Make Text Box
There's another page in the Web application that can benefit from the jQuery UI auto-complete functionality. On the Vehicle Type maintenance page, when the user wants to add a new vehicle, they should be able to either add a new make or select from an existing one.

Add a New Method to the Vehicle Type Repository
Let's start by modifying the code on the server to support searching for vehicle makes. Open the RepositoryClasses\VehicleTypeRepository.cs file and add a new method named SearchMakes(), as shown in the following code snippet.

public List<string> SearchMakes(string make)
{
    return _DbContext.VehicleTypes
        .Select(v => v.Make).Distinct()
        .Where(v => v.StartsWith(make))
        .OrderBy(v => v).ToList();
}

This LINQ query roughly translates to the following SQL query:

SELECT DISTINCT Make
FROM Lookup.VehicleType
WHERE Make LIKE 'C%'

Add a New Method to the Vehicle Type View Model Class
Instead of calling the repository methods directly from controller classes, it's best to let your view model class call these methods. Open the VehicleTypeViewModel.cs file in the PaulsAutoParts.ViewModelLayer project. Add a new method to this class named SearchMakes() that makes the call to the repository class method you just created.
public List<string> SearchMakes(
    string searchValue)
{
    return ((VehicleTypeRepository)Repository)
        .SearchMakes(searchValue);
}

Create a New API Controller Class
Add a new class under the ControllersApi folder named VehicleTypeApiController.cs. Replace all the code within the new file with the code shown in Listing 10. Most of this code is boiler-plate for a Web API controller. The important piece is the SearchMakes() method that's going to be called from jQuery Ajax to perform the auto-complete.

Listing 10: Create a new Web API controller for handling vehicle type maintenance

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using PaulsAutoParts.AppClasses;
using PaulsAutoParts.Common;
using PaulsAutoParts.DataLayer;
using PaulsAutoParts.EntityLayer;
using PaulsAutoParts.ViewModelLayer;

namespace PaulsAutoParts.ControllersApi
{
    [ApiController]
    [Route("api/[controller]/[action]")]
    public class VehicleTypeApiController :
        AppController
    {
        #region Constructor
        public VehicleTypeApiController(
            AppSession session,
            IRepository<VehicleType,
                VehicleTypeSearch> repo) : base(session)
        {
            _repo = repo;
        }
        #endregion

        #region Private Fields
        private readonly IRepository<VehicleType,
            VehicleTypeSearch> _repo;
        #endregion

        #region SearchMakes Method
        [HttpGet("{make}", Name = "SearchMakes")]
        public IActionResult SearchMakes(string make)
        {
            IActionResult ret;

            VehicleTypeViewModel vm = new(_repo);

            // Return all makes found
            ret = StatusCode(StatusCodes.Status200OK,
                vm.SearchMakes(make));

            return ret;
        }
        #endregion
    }
}

Add jQuery UI Code to the Vehicle Type Page
Open the Views\VehicleType\VehicleTypeIndex.cshtml file and, at the top of the page just below the setting of the page title, add a new section to include the jquery-ui.css file. This is needed for styling the auto-complete drop-down.

@section HeadStyles {
    <link rel="stylesheet"
        href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
}

Now go to the bottom of the file and, in the @section Scripts area just before your opening <script> tag, add the following <script> tag to include the jQuery UI JavaScript file.

<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js">
</script>

Add a new method to the pageController closure named makesAutoComplete(). The makesAutoComplete() method is the publicly exposed method that is called from $(document).ready() to hook up the jQuery auto-complete to the vehicle makes text box, just like you did for the category text box. Pass in an object to this method that sets the source property to a method named searchMakes(), which is called to retrieve the vehicle makes to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchMakes() method. I've set it to one, so the user must type in at least one character in order to have the searchMakes() method called.

function makesAutoComplete() {
    // Hook up Makes auto-complete
    $("#SelectedEntity_Make").autocomplete({
        source: searchMakes,
        minLength: 1
    });
}

Add the searchMakes() method that is called from the source property. This method must accept a request object and a response callback function. It uses the $.get() method to make the Web API call to the SearchMakes() method, passing in the request.term property, which is the text the user entered into the vehicle makes text box. If the call is successful, the data retrieved from the Ajax call is sent back via the response callback function.

function searchMakes(request, response) {
    $.get("/api/VehicleTypeApi/SearchMakes/" +
        request.term, function (data) {
        response(data);
    })
    .fail(function (error) {
        console.error(error);
    });
}
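Before wiring up the rest of the client-side code, you can sanity-check the new Web API directly from the browser's address bar. Given the [Route("api/[controller]/[action]")] attribute in Listing 10, a GET request like the one below should return a JSON array of makes (the values shown here are made up for illustration):

GET /api/VehicleTypeApi/SearchMakes/C

["Cadillac", "Chevrolet", "Chrysler"]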
Modify the return object to expose the makesAutoComplete() method.

return {
    "setSearchValues": setSearchValues,
    "setSearchArea":
        mainController.setSearchArea,
    "isSearchFilledIn":
        mainController.isSearchFilledIn,
    "addValidationRules": addValidationRules,
    "makesAutoComplete": makesAutoComplete
}

Finally, modify the $(document).ready() function to call the pageController.makesAutoComplete() method to hook up jQuery UI to the vehicle makes text box.

$(document).ready(function () {
    // Add jQuery validation rules
    pageController.addValidationRules();

    // Hook up makes auto-complete
    pageController.makesAutoComplete();

    // Setup the form submit
    mainController.formSubmit();

    // Collapse search area or not?
    pageController.setSearchValues();
    // Initialize search area on this page
    pageController.setSearchArea();
});

Try It Out
Run the application and select the Admin > Vehicle Types menu. Click on the Add button and click into the Makes text box. Type the letter C and you should see a drop-down appear of makes that start with the letter C.

Search by Multiple Fields in Auto-Complete
Technically, in the last sample, you should also pass the year that the user input to the vehicle makes auto-complete. However, just to keep things simple, I wanted to pass in only a single item. Let's now hook up the auto-complete for the vehicle model input. In this one, you're going to pass the vehicle year, the make, and the letter typed into the vehicle model text box to a Web API call from the auto-complete method.

Add a New Method to the Vehicle Type Repository
Open the RepositoryClasses\VehicleTypeRepository.cs file and add a new method named SearchModels(), as shown in the following code snippet. This method makes the call to SQL Server to retrieve all distinct vehicle models for the specified year, make, and the first letter or two of the model passed in.

public List<string> SearchModels(
    int year, string make, string model)
{
    return _DbContext.VehicleTypes
        .Where(v => v.Year == year &&
            v.Make == make &&
            v.Model.StartsWith(model))
        .Select(v => v.Model).Distinct()
        .OrderBy(v => v).ToList();
}

Add a New Method to the Vehicle Type View Model Class
Instead of calling repository methods directly from controller classes, it's best to let your view model class call these methods. Open the VehicleTypeViewModel.cs file in the PaulsAutoParts.ViewModelLayer project. Add a new method to this class named SearchModels() that makes the call to the repository class method you just created.

public List<string> SearchModels(int year,
    string make, string searchValue)
{
    return ((VehicleTypeRepository)Repository)
        .SearchModels(year, make, searchValue);
}

Modify the API Controller
Open the ControllersApi\VehicleTypeApiController.cs file and add a new method named SearchModels() that can be called from the client-side code. This method is passed the year, make, and model to search for. It initializes the VehicleTypeViewModel class and makes the call to the SearchModels() method to retrieve the list of models that match the criteria passed to this method.

[HttpGet("{year}/{make}/{model}",
    Name = "SearchModels")]
public IActionResult SearchModels(int year,
    string make, string model)
{
    IActionResult ret;

    VehicleTypeViewModel vm = new(_repo);

    // Return all models found
    ret = StatusCode(StatusCodes.Status200OK,
        vm.SearchModels(year, make, model));

    return ret;
}
Modify the Page Controller Closure
Open the Views\VehicleType\VehicleTypeIndex.cshtml file. Add a new method to the pageController closure named modelsAutoComplete(). The modelsAutoComplete() method is the publicly exposed method called from $(document).ready() to hook up the auto-complete to the models text box using the autocomplete() method. Pass in an object to this method to set the source property to the function to call to get the data to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchModels() function.

function modelsAutoComplete() {
    // Hook up Models auto-complete
    $("#SelectedEntity_Model").autocomplete({
        source: searchModels,
        minLength: 1
    });
}

Add the searchModels() method (Listing 11) that's called from the source property. This method must accept a request object and a response callback function. It uses the $.get() method to make the Web API call to the SearchModels() method, passing in the year, the make, and the request.term property, which is the text the user entered into the vehicle model text box. If the call is successful, the data retrieved from the Ajax call is sent back via the response callback function.

Listing 11: The searchModels() method passes three values to the SearchModels() Web API

function searchModels(request, response) {
    let year = $("#SelectedEntity_Year").val();
    let make = $("#SelectedEntity_Make").val();

    if (make) {
        $.get("/api/VehicleTypeApi/SearchModels/" +
            year + "/" + make + "/" +
            request.term, function (data) {
            response(data);
        })
        .fail(function (error) {
            console.error(error);
        });
    }
    else {
        // No make selected yet, so there are no models to suggest
        response([]);
    }
}

Modify the return object to expose the modelsAutoComplete() method from the closure.

return {
    "setSearchValues": setSearchValues,
    "setSearchArea":
        mainController.setSearchArea,
    "isSearchFilledIn":
        mainController.isSearchFilledIn,
    "addValidationRules": addValidationRules,
    "makesAutoComplete": makesAutoComplete,
    "modelsAutoComplete": modelsAutoComplete
}

Modify the $(document).ready() function to make the call to the pageController.modelsAutoComplete() method, as shown in Listing 12.

Listing 12: Hook up the auto-complete functionality by calling the appropriate method in the pageController closure

$(document).ready(function () {
    // Add jQuery validation rules
    pageController.addValidationRules();

    // Hook up makes auto-complete
    pageController.makesAutoComplete();

    // Hook up models auto-complete
    pageController.modelsAutoComplete();

    // Setup the form submit
    mainController.formSubmit();

    // Collapse search area or not?
    pageController.setSearchValues();
    // Initialize search area on this page
    pageController.setSearchArea();
});

Try It Out
Run the application and click on the Admin > Vehicle Types menu. Click on the Add button and put the value 2000 into the Year text box. Type/Select the value Chevrolet in the Makes text box. Type the letter C in the Models text box and you should see a few models appear.

Summary
In this article, you once again added some functionality to improve the user experience of your website. Calling Web API methods from jQuery Ajax can greatly speed up the performance of your Web pages. Instead of having to perform a complete post-back and redraw the entire Web page, you can retrieve a small amount of data and update just a small portion of the Web page. Eliminating post-backs is probably one of the best ways to improve the user experience of your Web pages. Another technique you learned in this article was to take advantage of the jQuery UI auto-complete functionality.

Paul D. Sheriff
ONLINE QUICK ID 2201041

Software Development is Getting Harder, Not Easier
I hate that expression, "It's complicated." When people tell me it's complicated, it almost always isn't. What they're really saying is that they don't want to talk about it. But when we're talking about software development, it really IS complicated. Software development is notoriously difficult and it's getting steadily harder. And we need to talk about it. Imagine a world where a developer who understands the basics of programming (things like variables, branching, and looping) can spend a weekend reading a manual cover to cover and become productive in a new environment by Monday morning. An environment that includes built-in UI, database, and reporting tools, its own IDE, integration paths for third-party systems, and everything else they might need to build and maintain a system. A world where the tool is only updated about once every two years or so and where reading a couple of articles or attending a conference for a few days gets them up to speed on ALL the new stuff. A world where developers get very skilled at using the tools and become masters of their craft.

That world existed between 20 and 30 years ago. Since then, the developer's world has accelerated and expanded at an increasing pace. Today, just keeping up with what's available to us is like drinking from a fire hose. Where do we even find time to write code?

Before you go thinking this article is about nostalgia, it isn't. I believe in the original definition of nostalgia, as a medical diagnosis for grave illness, one that couldn't be cured, even when the patient was lucky enough to be able to try to relive those memories. I believe we can't—and shouldn't—go back, only forward. But all hope is not lost. Let's start a conversation about complexity and begin to deal with it.

Where is complexity coming from? I believe it stems from the plethora of platforms and toolsets, higher expectations, larger systems, bigger teams, faster delivery of new ideas, and an ever-expanding set of opinions. Today, a full-stack developer must have a good working knowledge of multiple user interface tools, databases, cloud offerings, and business logic tools. Even picking just one UI platform, say HTML and JavaScript, there are hundreds of frameworks to choose from, and they need to be mixed and matched to get a good result. New frameworks and major updates to existing frameworks are coming out almost daily. No one can make a perfect decision.

That's just for development; I didn't include new Web assembly-centric UIs like Blazor. I also ignored desktop and native mobile applications and more, and included choosing and learning an IDE to develop in. The number of platforms and toolsets available today is astounding, and the pendulum is now swinging back for many companies who previously touted giving teams complete autonomy over their platforms and toolsets, to choosing and standardizing on a smaller subset. These companies are coping with too many choices, which leads to both decision paralysis and an inability to get everyone on the same page. Constraining the number of choices is one way to combat complexity.

Higher expectations, larger systems, and larger teams are other drivers of complexity. Development is no longer just getting the code to function and doing some performance tuning. It also has to be secure, scalable, a great user experience, accessible from a variety of devices, cloud-deployable, tested, maintainable, kept up to date, and continuously improved. Also, at one time, most apps targeted internal users; now most apps are targeted at consumers, which ups the bar considerably.

The move toward both more systems and larger systems requires more teams and more developers, which leads to difficulties getting everyone rowing the boat in the same direction. When I say teams in this article, I don't just mean a development team within a company. I also mean the team developing the toolsets and platforms and those consuming what the team is building. We must work with teams both inside and outside our walls. This complexity can be mitigated by better collaboration.

The collaboration problem isn't unique to the software industry, and we're getting better at meeting these challenges, but the trends aren't going to slow down or reverse themselves.

The biggest drivers of complexity, in my experience, and those we can do the most about, are the last two I mentioned. The faster delivery of new ideas and the ever-expanding set of opinions have, ironically, sprung from our own attempts to solve complexity issues and to make things easier. Today, everyone who solves an obstacle to coding productivity can publish and promote that solution. The solutions are often quite good and are easy to find on the Internet by anyone having a similar problem. However, sometimes the solution becomes widely accepted and is thought of as the "right way" to code.


What our industry lacks is a good way to tell if a particular solution even applies to a project. We sometimes adopt things like SOLID principles, microservices, containers, Domain Driven Design, and container orchestration because they're considered "good" or they're thought of as a way to future-proof our solution, when in fact, they are often solutions to problems we don't have. In those cases, we're only adding complexity by adopting them.

How, then, can we be better at using the right tools at the right time? There's no magic solution, only hard work, careful reflection, and creativity. Let's examine the problem.

How to Solve the Problem of Complexity
When we think about how to solve the problem of complexity, the answer invariably comes down to breaking a complex system into a series of smaller, more discrete, less complex systems that aren't too complex standing alone. This makes the new modules more approachable and the goals easier to achieve. But nothing comes for free, and the catch is that we now have to communicate and coordinate among these smaller, more discrete systems, which often means new tools and new patterns, and that actually ADDS complexity to the system as a whole. It results in a new (but usually manageable) set of problems and more code to write, test, and maintain. Because of this, we have to be very careful in choosing how we approach each problem. It's time we stopped living by the dictum that every problem can be solved by adding another layer of abstraction and started asking if it should be solved that way.

What can we do about the complexity that comes from the steady gushing of new ideas? I believe that we, as an industry, should be more demanding of ourselves and of others. Instead of looking only at all of the good that comes from something new, we should demand to also know the downsides and demand help in determining when and where these ideas can be used to our advantage. Often, when looking at a new technology, the material is presented by diving in and showing how it works. There isn't even a mention of the tradeoffs or even which problems are being solved.

New technologies should be approached with litmus tests and rules of thumb, even if we have to develop them ourselves. Is this new idea "good" or "bad"? It really depends on the problem you're trying to solve. What we need are better, more available ways to make that determination. Wouldn't it be refreshing to do some research into a new technology and read about why it was created, which things it does really well, and which things it isn't intended for, instead of reading about how sliced bread has a new challenger? I see this kind of documentation occasionally, but not nearly enough. It does us all a disservice to become such a fanboy of a technology that we don't present information that allows our peers to make good decisions. That information is at least as valuable as the sunshine we spread. Often more.

When thinking about the rapid growth of complexity and information in development, I often think about industries that have gone through this kind of evolution ahead of us. The airliner industry is a good example. The early airliner industry had a lot of competitors and the challenges were mainly engineering problems. Building commercial airplanes had been figured out and the challenge was in improving the airplanes. As the industry matured, the planes became increasingly larger and more complex, and companies handled the complexity by adding more draftsmen, more engineers, more workstations, more bodies. They next handled the increased complexity by breaking the process down into smaller, more discrete processes. Some employees only worked on cabin interiors, some only worked on aerodynamics, some only on landing gear. All this segregation and specialization added complexities in other areas, like coordination among the teams and common goals. Hundreds of draftsmen with pencils were generating millions of drawings to be presented and argued over in endless meetings. New designs started to take years, sometimes decades, to come to fruition. Companies failed or were acquired by other companies.


The process had become too complex. This is where our industry is now. Developers are specializing more and we have lots of bodies all headed in different directions.

Companies like Boeing and Airbus innovated and began using software and collaboration tools that replaced much of the manual engineering, drafting, and meetings, and things began to speed up again, but not for the reason you may think. Software wasn't a magic bullet. All of that innovation also led to new capabilities, offsetting or even increasing complexity instead of reducing it. Today's airliners are more complex than ever and the next generation will be even more complex.

What turned things around for the airliner manufacturers was that the teams and their software were getting better at working together; they created and adhered to standards and, most critically, they simplified the work other teams need to do to make use of what they'd built. The individual components were trending toward becoming black boxes that either did or didn't fit the requirements of a particular project and were relatively easy to make decisions about, whether that decision was to modify the black box or to build a new one. It wasn't the tooling or the innovation that made the defining difference. It was the ethos that each team was part of a larger team and that how those teams interacted was as important as what that team actually produced.

As the software industry continues to evolve, the unfettered gushing of new ideas, tools, technologies, and platforms must become more stringent. Developers aren't keeping up. All of the wonderful toys have become as much a burden on the industry as a blessing. Software projects still fail at an astoundingly high rate and it's almost always because either the requirements are too complex or the chosen implementation is too complex.

Complexity is killing us. Our industry needs to develop the airliner ethos that each team (in or out of your walls) is part of a larger team and that how those teams interact is as important as what the team produces. Our efforts must not be only to build newer, shinier, better, faster things, but to trend toward building black boxes that either do or don't fit the requirements of a particular project and are easy to reason about. We must allow others to make good, informed decisions about what we build.

This is where the next big innovation will happen. Evolving technologies, tools, platforms, ideas, and design patterns are good things, but they're only useful if they can be discovered, understood, and leveraged. I believe we need a self-curating system beyond the plethora of take-it-or-leave-it open-source projects and the commercially driven resources we have now. Our current situation has taken shape and that shape is a pile. And it's a very large and unruly pile that's overwhelming and doesn't serve our needs. We must organize that pile if we are to evolve.

Conclusion
There are things we can do to keep complexity from killing us. Currently, each of us is faced with a gigantic pile of new ideas, each labeled with a catchy name, some industry buzzwords, and some marketing blurbs. The pile is too big and too messy. We need to change that so that each idea in the pile is labeled with its categorizations and specifications, so we can organize them and make rational and appropriate decisions about them. We need to specialize so that we can reduce the sheer number of ideas that we need to worry about and increase our understanding of and improve our discourse about those we do care about. We need to create better ways to collaborate with other specialists and their parts of the pile. We need to pick reasonable, curated defaults, so that not every specialist has to decide between every option in their part of the pile, every time. And finally, we need to develop a professional approach to these ideas so we can train the next generation, not just on looping and branching and technical basics, but also on how to make good use of the organized, categorized, curated, and well-described pile of ideas we're creating.

It is complicated. But there's hope.

Mike Yeager
www.internet.com

Mike is the CEO of EPS's Houston office and a skilled .NET developer. Mike excels at evaluating business requirements and turning them into results from development teams. He's been the Project Lead on many projects at EPS and promotes the use of modern best practices, such as the Agile development paradigm, use of design patterns, and test-driven and test-first development. Before coming to EPS, Mike was a business owner developing a high-profile software business in the leisure industry. He grew the business from two employees to over 30 before selling the company and looking for new challenges. Implementation experience includes .NET, SQL Server, Windows Azure, Microsoft Surface, and Visual FoxPro.


ONLINE QUICK ID 2201051

Beginner's Guide to Deploying PHP Laravel on the Google Cloud Platform: Part 2
In the first article in this series (CODE Magazine, November/December 2021), you were introduced to Google Cloud Platform (GCP) and the PHP Laravel framework. You started by creating your first PHP Laravel project and pushed the app to a GitHub repository. Then you moved on to creating the Google App Engine (GAE) project and built the Google Cloud Build workflow to enable CI/CD (https://www.redhat.com/en/topics/devops/what-is-ci-cd) for the automated deployments on GCP.

Now, you'll go a step further and connect your app to a local MySQL database. Then, you'll introduce the Google Cloud SQL service and create your first SQL database in the cloud. Right after that, I'll show you one way to run Laravel database migrations from within the Cloud Build workflow. Finally, I'll enhance the Cloud Build workflow by showing you how to back up the SQL database every time you deploy a new version of the app on GCP.

First things first, let's locally connect the Laravel app to a MySQL database.

Bilal Haidar
[email protected]
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, a Microsoft MVP of 10 years, an ASP.NET Insider, and has been writing for CODE Magazine since 2007. With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript, and TypeScript.

Create and Use a Local MySQL Database
In Part 1 of this series, I introduced Laravel Sail (https://laravel.com/docs/8.x/sail). It's a service offered by the Laravel team to dockerize your application locally. One of the containers that Sail creates locally, inside Docker, is the mysql container. It holds a running instance of the MySQL Database Service. Listing 1 shows the mysql service container section inside the docker-compose.yaml file.

Listing 1: The mysql service container section inside docker-compose.yaml

mysql:
    image: 'mysql:8.0'
    ports:
        - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
        MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
        MYSQL_DATABASE: '${DB_DATABASE}'
        MYSQL_USER: '${DB_USERNAME}'
        MYSQL_PASSWORD: '${DB_PASSWORD}'
        MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
        - 'sailmysql:/var/lib/mysql'
    networks:
        - sail
    healthcheck:
        test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
        retries: 3
        timeout: 5s

When you start the Sail service, it creates a Docker container for the mysql service and automatically configures it with a MySQL database that's ready to use in your application.

Sail picks up the database details from the current application environment variables that you usually define inside the .env file. Let's have a look at the database section of the .env file:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=sail
DB_PASSWORD=password

These are the default settings that ship with a new Laravel application using the Sail service. You can change the settings as you see fit. For now, I just changed the database name to be gcp_app.

In Laravel 8.x, you can use any of the following database systems: MySQL 5.7+, PostgreSQL 9.6+, SQLite 3.8.8+, or SQL Server 2017+.

Step 1: Create and Connect to a MySQL Database
For now, let's keep the rest of the settings as they are and start up the Docker containers using the Sail service command:

sail up -d

This command starts up all the services that the docker-compose.yaml file hosts.

Let's connect to the database that Sail has created. I'm using the TablePlus (https://tableplus.com/) database management tool to connect to the new database. Feel free to use any other management tool of your own preference. Figure 1 shows the database connection window.

Figure 1: The TablePlus Connection Window

I've highlighted the important fields that need your attention:

• Name: The name of the connection
• Host: The IP address of the server hosting the MySQL database
• User: The database user
• Password: The user password
• Database: The name of the database
• Database: The name of the database image: 'mysql:8.0'
ports:
Once you’re done filling in all the necessary fields, click the - '${FORWARD_DB_PORT:-3306}:3306'
environment:
Connect button. Figure 2 shows the database tables. MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
MYSQL_DATABASE: '${DB_DATABASE}'
You might be wondering where the tables came from. This is MYSQL_USER: '${DB_USERNAME}'
the result of creating and running the app in the first part of MYSQL_PASSWORD: '${DB_PASSWORD}'
MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
this series. You ran a command to migrate the database and volumes:
create all the tables that shipped with Laravel. - 'sailmysql:/var/lib/mysql'
networks:
The command to run is: - sail
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
sail artisan migrate:fresh retries: 3
timeout: 5s
Now that you’ve successfully connected your app to a fresh
copy of a MySQL database, let’s build a page to manage code
editors that you want to keep track of in your database.
Step 2: Build the Coding Editors' Management Page
Let's build a simple page to manage a list of coding editors. The goal is to allow you to add a new coding editor, with some details, and view the list of all the editors you're adding.

Step 2.1: Install and Configure Tailwind CSS
Start by installing and configuring Tailwind CSS (https://tailwindcss.com/) in the Laravel project. It's a booming CSS framework that I'll use to style the page you're building in this section. You can check out their online guide on how to install and use it inside a Laravel project (https://tailwindcss.com/docs/guides/laravel); a condensed sketch follows below.

Another option is to make use of the laravel-frontend-presets/tailwindcss package, a Laravel front-end scaffolding preset for Tailwind CSS (https://github.com/laravel-frontend-presets/tailwindcss).
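For quick reference, the Tailwind guide for Laravel boiled down, at the time of writing, to installing the npm packages and generating a config file. Treat this as a hedged sketch (the exact steps depend on your Tailwind and Laravel Mix versions) and defer to the linked guide:

npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init

After that, you add the @tailwind base, @tailwind components, and @tailwind utilities directives to resources/css/app.css and run npm run dev to compile the assets.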
Step 2.2: Create the Editors' Blade Page
Now that you've installed and configured Tailwind CSS, let's create your first Blade (https://laravel.com/docs/8.x/blade) view to manage the editors. Locate the \resources\views folder and create the editors.blade.php file. Inside this file, paste the content you can find in this public GitHub Gist: Editors View (https://gist.github.com/bhaidar/e9c4516074a2346f0ce226ce92003cfc).

The view is straightforward. It consists of an HTML Form to allow the user to add a new editor, and a table underneath to show all stored editors in the database. This will do the job to demonstrate a Laravel database connection.

Step 2.3: Add a Route for the New View
To access the new view in the browser, let's add a new route to point to this new view. Locate the \routes\web.php file and append the following two routes:

Route::get(
    '/editors', [
        \App\Http\Controllers\EditorController::class,
        'index'
    ])->name('editors.index');

Route::post(
    '/editors', [
        \App\Http\Controllers\EditorController::class,
        'store'
    ]);

The first route allows users to access the view (GET) and has a route name of editors.index. The second route, on the other hand, allows executing a POST request to create a new editor record in the database.

The routes use the EditorController that you haven't created yet. Let's run a new Artisan command to create this controller:

sail artisan make:controller EditorController

This command creates a new controller under the \App\Http\Controllers folder.

The index() action retrieves all stored editors in the database and returns the editors view together with the data to display.

public function index()
{
    $editors = Editor::all();
    return view('editors', compact('editors'));
}

The store() action takes care of storing a new editor record in the database. It validates the POST request to make sure that all required fields are there. Then, it creates a new editor record in the database. Finally, it redirects the user to the editors.index route (you have defined this inside \routes\web.php). Listing 2 shows the store() action source code entirely.

Listing 2: Store() method

public function store(Request $request)
{
    $request->validate([
        'name' => 'required',
        'company' => 'required',
        'operating_system' => 'required',
        'license' => 'required',
    ]);

    Editor::create($request->all());

    return redirect()->route('editors.index')
        ->with('success', 'Editor created successfully.');
}

Step 2.4: Create a Laravel Migration
To store information about coding editors, you need to create a corresponding database table. In Laravel, you need to create a new database migration and run it against the connected database. Welcome to Artisan Console!

Laravel ships with Artisan (https://laravel.com/docs/8.x/artisan), a command-line interface (CLI) that offers many commands to create functionality in the application. It's located at the root of the project folder and can be called like any other CLI on your computer.

To create a new Laravel database migration, run the following command:

sail artisan make:model Editor -m

This command makes a new Model (https://laravel.com/docs/8.x/eloquent) class together with a database migration file (the -m option).

Laravel's make:model Artisan command can generate a model, controller, migration, and factory in one single command. You can run the options using -mcf.
Locate the \database\migrations folder and open the new migration PHP file that you just created. Replace the content of this file with the content shown in Listing 3.

Listing 3: Database migration

class CreateEditorsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('editors', function (Blueprint $table) {
            $table->id();
            $table->string('name', 255);
            $table->string('company', 500);
            $table->string('operating_system', 500);
            $table->string('license', 255);
            $table->timestamp('created_at')->useCurrent();
            $table->timestamp('updated_at')->nullable();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('editors');
    }
}

The core of every migration file is the up() method.

Schema::create('editors',
    function (Blueprint $table) {
        $table->id();
        $table->string('name', 255);
        $table->string('company', 500);
        $table->string('operating_system', 500);
        $table->string('license', 255);
        $table->timestamp('created_at')->useCurrent();
        $table->timestamp('updated_at')->nullable();
});

Inside the up() method, you're creating the editors table and specifying the columns that should go under it. Listing 4 shows the Editor model class.

Listing 4: Editor Model class

class Editor extends Model
{
    use HasFactory;

    protected $fillable = [
        'name',
        'company',
        'operating_system',
        'license',
        'created_at'
    ];
}

I've added the $fillable property to whitelist the columns that are available for mass assignment. This comes in later when creating and storing editors in the database.

You can read more about $fillable and $guarded in Laravel by checking this awesome and brief introduction on the topic: https://thomasventurini.com/articles/fillable-vs-guarded-on-laravel-models/.

Now that the migration and model are both ready, let's run the migration to create the new table in the database. Run the following command:

sail artisan migrate

You can verify that this command has created the table by checking your database.

Step 2.5: Run the App
The final step is to run the app and start using the editors' route to store a few coding editors in the database and make sure the database connection is up and running. Figure 3 shows the Editors' view in the browser.

Figure 3: Editors' view in the browser

That's it! Now that you've successfully connected your app to a local database, let's explore Google Cloud SQL and configure the app to use one.

Before moving on, make sure you commit and push your change onto GitHub. Keep in mind that this action triggers the GCP Cloud Build Workflow to deploy a new version of the app.

Create and Use a Google Cloud SQL Database
Google Cloud SQL is a fully managed relational database that supports MySQL (https://www.mysql.com/), PostgreSQL (https://www.postgresql.org/), and Microsoft SQL Server (https://www.microsoft.com/en-us/sql-server/sql-server-downloads).

You can read the full documentation on Google Cloud SQL here: https://cloud.google.com/sql.

In this section, you're going to create your first Cloud SQL instance and connect to it from your Laravel app running on the GAE (Google App Engine).

Log into your account at https://console.cloud.google.com/ and navigate to the Cloud SQL section by selecting it from the left-side menu. Figure 4 shows where to locate Cloud SQL on the main GCP menu.

Figure 4: Google Cloud SQL

The GCP (Google Cloud Platform) takes you through a few steps to help you easily create a new instance. Let's start!

Step 1: Create a MySQL Instance
Locate and click the CREATE INSTANCE button. Follow the steps to create your first Cloud SQL instance.

The next step is to select which database engine you're going to create. For this series, stick with a MySQL database. Figure 5 shows the database engine offerings by GCP.

Figure 5: GCP database engine offerings

Select the Choose MySQL button. Next, GCP prompts you to fill in the configuration details that GCP needs to create your MySQL instance. Figure 6 shows the MySQL instance configuration settings.

Figure 6: MySQL instance configuration settings

At a minimum, you need to input the following fields:

• Instance ID
• Password (for the root MySQL instance user). Make sure you remember this password as you'll need it later.
• Database version. I'll stick with MySQL 5.7 for now.
• Region (preferably the same region you picked for the GAE app)
• Zonal availability (either single or multiple zones, depending on your requirements and needs)

The rest of the fields are optional. Look at them in case you want to change anything.

Click the CREATE INSTANCE button. GCP starts creating the instance and directs you to the Cloud SQL Dashboard upon completion. Figure 7 shows the Cloud SQL Dashboard.

Figure 7: Cloud SQL dashboard

From here, you'll start configuring the MySQL instance and preparing it for connection from your Laravel app.

Step 2: Create a MySQL Database
On the gcp-app-database dashboard, locate and click Databases on the left-side menu. This page lists all the databases you create under the MySQL instance.

Click the CREATE DATABASE button to create a database for the Laravel app. Figure 8 shows the create database form.

Figure 8: Create a new MySQL database.

Provide a name for the new database and click the CREATE button. Just a few seconds later, you'll see the new database listed under the current instance's list of databases.
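If you'd rather script these two steps than click through the console, the gcloud CLI offers equivalents. This is an editorial sketch that reuses the names from this series; us-central1 and YOUR_ROOT_PASSWORD are placeholders, so double-check the flags against your Cloud SDK version:

# Create the MySQL 5.7 instance
gcloud sql instances create gcp-app-database \
    --database-version=MYSQL_5_7 \
    --region=us-central1 \
    --root-password=YOUR_ROOT_PASSWORD

# Create the application database inside the instance
gcloud sql databases create db_1 \
    --instance=gcp-app-database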

Locate and click Connections on the left-side menu. This page lists all the networking configurations that govern your database instance.

For now, keep the Public IP option selected. It allows you to connect to your database by using the Cloud SQL Proxy.

Step 3: Use Cloud SQL Auth Proxy to Connect to the Database
The Cloud SQL Auth proxy (https://cloud.google.com/sql/docs/mysql/sql-proxy) provides secure access to your Cloud SQL instances without the need for Authorized networks or for configuring SSL. Download the Cloud SQL Auth Proxy version that best fits your environment by checking this resource: https://cloud.google.com/sql/docs/mysql/sql-proxy#install

You have multiple ways to use the Cloud SQL Auth Proxy. You can start it using TCP sockets, Unix sockets, or the Cloud SQL Auth proxy Docker image. You can read about how to use the Cloud SQL Auth Proxy here: https://cloud.google.com/sql/docs/mysql/connect-admin-proxy#tcp-sockets
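For completeness, the Docker image route mentioned above follows roughly this shape. The image name comes from Google's documentation, but the tag and the service account path here are placeholders; verify both against the linked docs before relying on this:

docker run -d \
    -v /path/to/service-account.json:/config \
    -p 127.0.0.1:3306:3306 \
    gcr.io/cloudsql-docker/gce-proxy:1.28.0 /cloud_sql_proxy \
    -instances=INSTANCE_CONNECTION_NAME=tcp:0.0.0.0:3306 \
    -credential_file=/config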

For this series, I'll be using the TCP sockets connection. Use the following command to start the Cloud SQL Auth Proxy and connect to the Cloud SQL instance:

./cloud_sql_proxy \
-instances=INSTANCE_CONNECTION_NAME=tcp:3306

INSTANCE_CONNECTION_NAME is the connection name that GCP provides; you can locate it in Figure 7 under the Connection name field. Replace it with the real connection name.

Figure 9 shows the Cloud SQL Auth Proxy connected and ready to establish a database connection to any database under the currently connected instance.

Figure 9: Cloud SQL Auth Proxy connection

With TCP connections, the Cloud SQL Auth proxy listens on localhost (127.0.0.1) by default. So when you specify tcp:PORT_NUMBER for an instance, the local connection is at 127.0.0.1:PORT_NUMBER. Figure 10 shows how to connect to db_1 using TablePlus (https://tableplus.com/).

Figure 10: Connect to db_1 via Cloud SQL Auth Proxy

I've highlighted the important fields that need your attention:

• Name: The name of the connection
• Host: The IP address of the server hosting the MySQL database. In this case, it's 127.0.0.1.
• User: The database user. In this case, you're using the root user.
• Password: The user password. The root password that you previously created for the MySQL instance.
• Database: The name of the database. In this case, it's db_1.

Click the Connect button to successfully connect to the database. The database is still empty and you'll fill it with Laravel tables in the next section.

Step 4: Run Laravel Migrations on the New Database
In Steps 2.4 and 2.5, you created a database migration and pushed the code to GitHub. By pushing the code, the GCP Cloud Build Workflow runs and deploys a new version of the app. This means that your app on the GAE is now up to date, with the editors' view up and running.

Before you test your view on GAE, let's run the Laravel migrations on the cloud database. While the Cloud SQL Auth Proxy is running, switch to the app source code and apply the following changes.

Locate and open the .env file and update the database section as follows:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=db_1
DB_USERNAME=root
DB_PASSWORD=

Make sure you replace the DB_HOST with 127.0.0.1. Also replace the DB_DATABASE with db_1 and the DB_USERNAME with root, and, finally, set the DB_PASSWORD for the root user.

On a terminal window, run the following command to refresh the app configurations:

php artisan config:cache

This command clears the old configurations and caches the new ones.

The app, when it runs, is now connected to the Cloud database. Hence, to run the Laravel migrations, you just need to run the following command:

php artisan migrate:fresh

Notice the use of php artisan rather than sail artisan. You use the sail command only when interacting with the Sail environment locally.

Switch back to TablePlus to see all the tables there. Figure 11 shows all the tables after running a fresh Laravel migration.

Figure 11: Tables created by running a fresh Laravel migration

You have successfully prepared the database and it's ready to accept new connections.

Step 5: Enable GCP APIs and Permissions
Before you can connect to the cloud database from within the GAE, you need to enable a few libraries and add some permissions.

Locate the following APIs and enable them in this order:

• Cloud SQL Admin API
• Google App Engine Flexible Environment

In Google Cloud, before using a service, make sure to enable its related APIs and Libraries.

To enable any API or Library on the GCP, on the left-side menu, click the APIs and Services menu item. Then, once you're on the APIs and Services page, click the Library menu item. Search for any API and enable it by clicking the ENABLE button.

In addition to enabling the two APIs, you need to add the role of Cloud SQL Client for the GAE Service Account under the IAM and Admin section.

On the left-side menu, locate and click the IAM and Admin menu item. Click the pencil icon to edit the account that ends with @appspot.gserviceaccount.com and that has the name of App Engine default service account. Figure 12 shows how to add the Cloud SQL Client role.

Figure 12: Adding the Cloud SQL Client role on the App Engine default user

Now that you've configured all APIs and libraries, let's connect the Laravel app that's running on GAE to the cloud database.

Step 6: Configure Laravel App on GAE to Use the Cloud Database
It's time to configure your app running inside GAE to connect to this cloud database. Start by locating and opening the \app.yaml file at the root folder of the app.

Open the file and append the following settings under the env_variables section:

DB_DATABASE: db_1
DB_USERNAME: root
DB_PASSWORD:
DB_SOCKET: '/cloudsql/INSTANCE_CONNECTION_NAME'

Replace the INSTANCE_CONNECTION_NAME with the real connection name. Then, add a new section to the \app.yaml file:

beta_settings:
  cloud_sql_instances: 'INSTANCE_CONNECTION_NAME'

This enables GAE to establish a Unix domain socket connection with the cloud database. You can read more about connecting to the Cloud database from GAE here: https://cloud.google.com/sql/docs/mysql/connect-app-engine-flexible#php.
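As background on why the DB_SOCKET variable works without any code changes: Laravel's stock config/database.php already maps it to the MySQL connection's unix_socket option. A trimmed excerpt of that default configuration (from the framework, not from this article):

'mysql' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
    // GAE's /cloudsql/... socket path flows in through DB_SOCKET
    'unix_socket' => env('DB_SOCKET', ''),
    // ...
],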
Commit and push your changes on GitHub. This triggers the Google Cloud Build workflow to run and deploy a new version of the app. Wait until the GCP deploys your app, then access the editors' view by following this URL: https://prismatic-grail-323920.appspot.com/editors.

This URL opens the coding editors' view. Start adding a few coding editors to try out the database connection and make sure all is working smoothly. Figure 13 shows the Editors' view up and running on GCP.

Figure 13: Editors' view up and running on GCP

You have successfully connected your Laravel app to the Google Cloud SQL database. Let's move on and enhance the Google Cloud Build workflow.

Run Laravel Migrations inside the Cloud Build Workflow
When you're deploying your app to GAE, there's almost no easy way to access the underlying Docker container and run your migrations. You need to automate this task as part of the Cloud Build workflow.

One way to automate running Laravel migrations inside a Cloud Build workflow is the following:

• Add a new controller endpoint in your app that can run the Artisan migration command. You need to secure this endpoint by authenticating the request and making sure it's coming solely from GCP. There are ways to do so, as you will see later.
• Add a Cloud Build step to issue a curl (https://gist.github.com/joyrexus/85bf6b02979d8a7b0308) POST request to the endpoint from within the Cloud Build workflow.

Step 1: Add a Controller Endpoint to Run the Migrations
Let's start by adding a new invokable controller in Laravel by running the following command:

sail artisan make:controller SetupController \
--invokable

An invokable controller is a single action controller that contains an __invoke method to perform a single task. Listing 5 shows the __invoke() function implementation.

Listing 5: SetupController __invoke() method

public function __invoke(Request $request):
    \Illuminate\Http\Response
{
    try {
        Log::debug('Starting: Run database migration');

        // run the migration
        Artisan::call('migrate',
            [
                '--force' => true
            ]
        );

        Log::debug('Finished: Run database migration');
    } catch (\Exception $e) {
        // log the error
        Log::error($e);

        return response('not ok', 500);
    }

    return response('ok', 200);
}

The function is simple. It calls the migrate command using the Artisan::call() function. It also does some logging to trace whether this task runs or fails.

The next step is to add a new route inside the file \routes\web.php as follows:

Route::post(
    '/setup/IXaOonJ3B7',
    '\App\Http\Controllers\SetupController'
);

I'm adding a random string suffix to the /setup/ URL, trying to make it difficult to guess this route path. One final step is to locate and open the \app\Http\Middleware\VerifyCsrfToken.php file. Then, make sure to enlist the /setup/ URL inside the $except array as follows:

protected $except = [
    'setup/IXaOonJ3B7'
];

This way, Laravel won't do a CSRF token verification (https://laravel.com/docs/8.x/csrf) when GCP requests the /setup/ URL.

Step 2: Amend Google Cloud Build to Run Migrations
Switch to \ci\cloudbuild.yaml and append a new Cloud Build step to invoke the /setup/ URL from within the Build workflow. Listing 6 shows the build step to invoke the /setup/ URL.

Listing 6: Invoke /setup/ inside the Google Build file

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  args:
    - "-c"
    - |
      RESPONSE=$(curl -o /dev/null -s -w "%{http_code}" \
        -d "" -X POST $_APP_BASE_URL)
      if [ "200" != "$$RESPONSE" ];
      then
        echo "FAIL: migrations failed"
        exit 1;
      else
        echo "PASS: migrations ran successfully"
      fi

The build step uses a container of the gcr.io/cloud-builders/gcloud Docker image to run a curl command on a new bash shell. To learn more about Google Cloud Build steps, check this resource: https://cloud.google.com/build/docs/build-config-file-schema.

The build step issues a POST curl request to a URL represented by $_APP_BASE_URL. The Google Cloud Build substitutes this variable with an actual value when it runs the trigger. The value of this variable shall be the full app /setup/ URL. You can learn more about Google Cloud Build substitution here: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values.

Step 3: Amend the Cloud Build Trigger to Pass Over the _APP_BASE_URL
Visit the list of triggers under Google Cloud Build. Locate the deploy-main-branch trigger and click to edit. Figure 14 shows how to edit a Google Cloud Build trigger.

Figure 14: Edit the Google Cloud Build trigger.

Once on the Edit Trigger page, scroll to the bottom, locate, and click the ADD VARIABLE button. This prompts you to enter a variable name and value. Variable names should start with an underscore. To use this same variable inside the Google Cloud Build workflow, you need to prefix it with a $ sign. At run time, when the trigger runs, GCP substitutes the variable inside the Build workflow with the value you've assigned on the trigger definition. Figure 15 shows the _APP_BASE_URL variable together with its value.

Save the trigger and you're ready to go!
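You can also fire the trigger manually instead of pushing a commit. An editorial aside: at the time of writing, this command lived under gcloud's beta surface, and deploy-main-branch is the trigger name used in this series:

gcloud beta builds triggers run deploy-main-branch \
    --branch=main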

Step 4: Run and Test the Trigger
Before running the trigger, let's make a new migration, for example, to add a new description column on the editors table.

Run the following command to generate a new migration file:

sail artisan make:migration \
add_description_column_on_editors_table

Listing 7 shows the entire migration file.

46 Beginner’s Guide to Deploying PHP Laravel on the Google Cloud Platform: Part 2 codemag.com
Listing 7: Database migration file

class AddDescriptionColumnOnEditorsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::table('editors', function($table) {
            $table->string('description', 255)->nullable();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::table('editors', function($table) {
            $table->dropColumn('description');
        });
    }
}

Figure 15: Adding a new trigger variable

The migration, when run, adds a new column to the editors table inside the database.

Save your work by adding all the new changes to Git and committing them to GitHub. This, in turn, triggers GCP to run the associated Google Cloud Build workflow.

Eventually, a new version of the app will be deployed on GCP. However, this time, the trigger will also POST to the /setup/ URL to run the Laravel migrations as part of running the Build workflow.

You can check the database to make sure the Build workflow runs the Laravel migration and accordingly adds a new column to the editors table.

So far, the /setup/ URL isn't authenticating the incoming request and ensuring that it's coming from GCP only. In the next part of this series, I'll explore one option to secure and authenticate your requests by using Google Secret Manager (https://cloud.google.com/secret-manager).

Conclusion
This article continued the previous one by connecting your Laravel app to a local database. The next step was to create a Google Cloud database and connect the app to it when running inside GAE. Finally, you enhanced the Google Cloud Build workflow to run the Laravel migrations as part of the Build workflow itself.

Next time I see you, I'll strengthen running the Laravel migration by implementing an authentication layer with Google Secret Manager.

In addition, deployments can go wrong sometimes, so you should have a backup plan. As you will see in the next article, you can take a backup snapshot of the Cloud database every time you run the Build workflow. If anything goes wrong, you can revert to an old backup of the database.

And much more… See you then!

Bilal Haidar

ONLINE QUICK ID 2201061

Working with Apache Kafka in ASP.NET 6 Core

Today's enterprises need reliable, scalable, and high-performant distributed messaging systems for data exchange in real-time. There are quite a few messaging systems out there, and Apache Kafka is one of them. It's open-source, versatile stream-processing software: a high-throughput, low-latency messaging system for distributed applications, written in Java and Scala.

This article provides a deep dive on how to work with Apache Kafka in ASP.NET 6 Core.

Joydip Kanjilal
[email protected]
Joydip Kanjilal is an MVP (2007-2012), software architect, author, and speaker with more than 20 years of experience. He has more than 16 years of experience in Microsoft .NET and its related technologies. Joydip has authored eight books, more than 500 articles, and has reviewed more than a dozen books.

If you want to work with the code examples discussed in this article, you should have the following installed in your system:

• Visual Studio 2022
• .NET 6.0
• ASP.NET 6.0 Runtime
• Apache Kafka
• Java Runtime Environment (JRE)
• 7-zip

You can download and install 7-zip from here: https://www.7-zip.org/download.html.

You can download JRE from here: https://www.java.com/en/download/.

If you don't already have Visual Studio 2022 installed in your computer, you can download it from here: https://visualstudio.microsoft.com/downloads/.

You can download Apache Kafka from here: https://kafka.apache.org/downloads.

Take advantage of Apache Kafka for high-performance, scalable, and reliable messaging in real-time.

Introduction to Apache Kafka
Streaming data refers to data constantly produced by hundreds of data sources, which often transmit the data records concurrently. A streaming platform must manage this continual influx of data while still processing it sequentially and progressively.

Kafka is a publish/subscribe messaging platform with built-in support for replication, partitioning, fault tolerance, and better throughput. It's an excellent choice for applications that need large-scale data processing. Kafka is mainly used to build real-time streaming data pipelines. Kafka incorporates fault-tolerant storage and stream-processing capabilities to allow for the storage and analysis of historical and real-time data.

Here's the list of Apache Kafka features:

• It can publish and subscribe to streams of data.
• It's capable of handling a vast number of read/write operations per second.
• It can persist data for a particular period.
• It has the ability to grow elastically with zero downtime.
• It offers support for replication, partitioning, and fault-tolerance.

Why Should You Use Apache Kafka?
Scalability. Apache Kafka is highly scalable. It supports high-performance sequential writes and separates topics into partitions to facilitate highly scalable reads and writes. This helps Kafka to enable multiple producers and consumers to read and write at the same time. Additionally, because Kafka is distributed, you can scale up by adding new nodes to the cluster.

High throughput. Throughput is a measure of the number of messages that arrive at a given point in time. Apache Kafka is capable of handling massive volumes of incoming messages at a high velocity per second (around 10K messages per second or a maximum request size of one million bytes per request, whichever comes first).

High performance. Apache Kafka can deliver messages at high speed and high volumes. It provides high throughput with low latency and high availability.

Highly reliable. Kafka is a fault-tolerant messaging system and is adept at recovering from failures quickly. Kafka can replicate data and handle many subscribers. In Apache Kafka, the messages are durable even after they have been consumed. This enables the Kafka Producer and Kafka Consumer to be available at different times and increases resilience and fault tolerance. Kafka can load balance consumers in the event of a failure. It is more reliable than other messaging services such as RabbitMQ, AMQP, JMS, etc.

Low latency. Latency refers to the amount of time required to process each message. Apache Kafka can provide high throughput with low latency and high availability.

Durability. Kafka messages are highly durable because Kafka stores the messages on the disk, as opposed to in memory.

Kafka vs. Traditional Messaging Systems
Kafka differs from traditional messaging queues in several ways. Kafka retains a message after it has been consumed. Quite the opposite: competitor RabbitMQ deletes messages immediately after they've been consumed.

RabbitMQ pushes messages to consumers, whereas Kafka consumers fetch messages by pulling.

Kafka can be scaled horizontally, while traditional messaging queues can only scale vertically.



Typical Use Cases
Here are some use cases for Kafka:

• Messaging: Kafka acts as a message broker. A message broker enables applications, services, and systems to communicate with one another and exchange information. It can decouple processing from data producers and store, validate, organize, route, and deliver messages to appropriate destinations.
• Application activity tracking: Kafka was originally developed to address application activity tracking. You can leverage Kafka to publish all events (user login, user registration, time spent by a logged in user, etc.) that occur in your application to a dedicated Kafka topic. Then you can have consumers subscribe to the topics and process the data for monitoring, analysis, etc.
• Log aggregation: You can publish logs to Kafka topics and then aggregate and process them when needed. Kafka can collect logs from various services and make them available to the consumers in a standard format (JSON).
• Real-time data processing: Today's applications need data to be processed as soon as it's available. IoT applications also need real-time data processing.
• Operational metrics: Kafka can aggregate the statistical data collected from several distributed applications and then produce centralized feeds of operational data.

Components of the Apache Kafka Architecture
The Apache Kafka architecture is comprised of the following components:

• Kafka Topic: A Kafka topic defines a channel for the transmission of data. When the producers publish messages to the topics, the consumers read messages from them. A unique name identifies a topic pertaining to a Kafka cluster. There's absolutely no limit to the number of topics you can create in a cluster.
• Kafka Cluster: A Kafka cluster comprises one or more servers or Kafka brokers. For high availability, a Kafka cluster typically contains many brokers, each of them having its own partition. Because they're stateless, ZooKeeper (see later in this list) is used to manage the cluster state.
• Kafka Producer: A Kafka producer serves as a data source for one or more Kafka topics and is responsible for writing, optimizing, and publishing messages to those topics. A Kafka producer can connect to a Kafka cluster through Zookeeper. Alternatively, it can connect to a Kafka broker directly.
• Kafka Consumer: A Kafka consumer consumes data through reading messages on the topics they've subscribed to. Incidentally, each Kafka consumer belongs to a particular consumer group. A Kafka consumer group comprises related consumers who share a common task. Kafka sends messages to the consumers within the group from different partitions of a topic.
• Kafka ZooKeeper: A Kafka Zookeeper manages and coordinates the Kafka brokers in a cluster. It also notifies producers and consumers in a Kafka cluster of the existence of new brokers or the failure of brokers.


• Kafka Broker: A Kafka broker acts as a middleman between producers and consumers, hosting topics and partitions and enabling sending and receiving messages between them. The brokers in a typical production Kafka cluster can handle many reads/writes per second. Producers and consumers don't communicate directly. Instead, they communicate using these brokers. Thus, if one of the producers or consumers goes down, the communications pipeline continues to function as usual.

Figure 1 illustrates a high-level Kafka architecture. A Kafka cluster comprises one or more Kafka brokers. The producers push messages into the Kafka topics in a Kafka broker, and the consumers pull those messages off a Kafka topic.

Figure 1: A high-level view of the components of the Kafka architecture

When Not to Use Kafka
Despite being the most popular messaging platform and having several advantages, you should not use Kafka in any of the following use cases:

• Kafka's not a good choice if you need your messages processed in a particular order. To process messages in a specific order, you should have one consumer and one partition. Instead, in Kafka, you have multiple consumers and partitions, so it isn't an ideal choice in this use case.
• Kafka isn't a good choice if you only need to process a few messages per day (maybe up to several thousand). Instead, you can take advantage of traditional messaging queues like RabbitMQ.
• Kafka is overkill for ETL jobs when real-time processing is required, because it isn't easy to perform data transformations dynamically.
• Kafka is also not a good choice when you need a simple task queue. Instead, it would be best if you leveraged RabbitMQ here.

Kafka isn't a replacement for a database, and it should never be used for long-term storage. Because Kafka stores redundant copies of data, it might be a costly affair as well. When you need data to be persisted in a database for querying, insertion, and retrieval, you should use a relational database like Oracle or SQL Server, or a non-relational database like MongoDB.

Setting Up Apache Kafka
First off, download the Apache Kafka setup file from the location mentioned earlier. Now switch to the Downloads folder in your computer and install the downloaded files one by one. Kafka is available as a zip file, so you must extract the archive to the folder of your choice. Assuming you've already downloaded and installed 7-zip and Java in your computer, you can proceed with setting up and running Apache Kafka.

Now follow the steps outlined below:

1. Switch to the Kafka config directory in your computer. It is D:\kafka\config in my computer.
2. Open the file server.properties.
3. Find and replace the line "log.dirs=/tmp/kafka-logs" with "log.dirs=D:/kafka/kafka-logs", as shown in Figure 2.
4. Save and close the server.properties file.
5. Now open the file zookeeper.properties.
6. Find and replace the line "dataDir=/tmp/zookeeper" with "dataDir=D:/kafka/zookeeper-data", as shown in Figure 3.
7. Save and close the file.

Figure 2: Setting up Apache Kafka: Specifying the logs directory
Figure 3: Setting up Apache Kafka: Specifying the data directory

By default, Kafka runs on the default port 9092 in your computer and connects to ZooKeeper at the default port 2181.

Switch to your Kafka installation directory and start Zookeeper using the following command:

.\bin\windows\zookeeper-server-start.bat config\zookeeper.properties

Figure 4 shows how it looks when Zookeeper is up and running in your system.

Launch another command window and write the following command in there to start Kafka:

.\bin\windows\kafka-server-start.bat config\server.properties

When Kafka is up and running in your system, it looks like Figure 5.

Figure 4: Zookeeper is up and running at the default port 2181
Figure 5: Kafka is up and running at the default port 9092.

Create Topic(s)
Now that Zookeeper and Kafka are both up and running, you should create one or more topics. To do this, launch a new command prompt window, type the following command in there, and press Enter:

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

You can list all topics in a cluster using the following command:

.\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
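To confirm the topic's partition count and replication factor, the matching describe command works on the same Kafka versions that still accept --zookeeper (an editorial aside; newer releases use --bootstrap-server instead):

.\bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic test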
Working with Apache Kafka in ASP.NET Core 6
In this section, you'll implement a simple Order Processing application. You'll build two applications: the producer application and the consumer application. Both of these applications will be created using ASP.NET 6 in the Visual Studio 2022 IDE.

Create a New ASP.NET 6 Project in Visual Studio 2022
Let's start building the producer application first. You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET 6 project in Visual Studio 2022:

1. Start the Visual Studio 2022 Preview IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web API" and click Next to move on.
3. Specify the project name as ApacheKafkaProducerDemo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," and the "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
6. Click Create to complete the process.

Follow the same steps outlined above to create another ASP.NET Core 6 Web API project. Name this project ApacheKafkaConsumerDemo. Note that you can also choose any meaningful name for both these projects.



You now have two ASP.NET Core 6 Web API projects: ApacheKafkaProducerDemo and ApacheKafkaConsumerDemo.

Install NuGet Package(s)
So far so good. The next step is to install the necessary NuGet package(s). To produce and consume messages, you need a client for Kafka. Use the most popular client: Confluent's Kafka .NET Client. To install the required packages into your project, right-click on the solution and select "Manage NuGet Packages for Solution...". Then type Confluent.Kafka in the search box, select the Confluent.Kafka package, and install it. You can see the appropriate screen in Figure 6.

Figure 6: Installing Confluent.Kafka NuGet Package

Alternatively, you can execute the following command in the Package Manager Console:

PM> Install-Package Confluent.Kafka

Now you'll create the classes and interfaces for the two applications.

Building the ApacheKafkaProducerDemo Application
Create a class named OrderRequest in a file named OrderRequest.cs with the following code in there:

namespace ApacheKafkaProducerDemo
{
    public class OrderRequest
    {
        public int OrderId { get; set; }
        public int ProductId { get; set; }
        public int CustomerId { get; set; }
        public int Quantity { get; set; }
        public string Status { get; set; }
    }
}

Create a new controller named ProducerController in the ApacheKafkaProducerDemo application with the code found in Listing 1.

Listing 1: Create a new API Controller

using Confluent.Kafka;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using System.Diagnostics;

namespace ApacheKafkaProducerDemo.Controllers {
    [Route("api/[controller]")]
    [ApiController]
    public class ProducerController : ControllerBase {
        private readonly string bootstrapServers = "localhost:9092";
        private readonly string topic = "test";

        [HttpPost]
        public async Task<IActionResult> Post([FromBody] OrderRequest orderRequest) {
            string message = JsonSerializer.Serialize(orderRequest);
            return Ok(await SendOrderRequest(topic, message));
        }

        private async Task<bool> SendOrderRequest(string topic, string message) {
            ProducerConfig config = new ProducerConfig {
                BootstrapServers = bootstrapServers,
                ClientId = Dns.GetHostName()
            };

            try {
                using (var producer = new ProducerBuilder<Null, string>(config).Build()) {
                    var result = await producer.ProduceAsync(topic, new Message<Null, string> {
                        Value = message
                    });

                    Debug.WriteLine($"Delivery Timestamp: {result.Timestamp.UtcDateTime}");
                    return await Task.FromResult(true);
                }
            } catch (Exception ex) {
                Console.WriteLine($"Error occurred: {ex.Message}");
            }

            return await Task.FromResult(false);
        }
    }
}



Building the ApacheKafkaConsumerDemo Application
Create a new class named OrderProcessingRequest in a file having the same name and a .cs extension with the following content in it:

namespace ApacheKafkaConsumerDemo
{
    public class OrderProcessingRequest
    {
        public int OrderId { get; set; }
        public int ProductId { get; set; }
        public int CustomerId { get; set; }
        public int Quantity { get; set; }
        public string Status { get; set; }
    }
}

Next, you'll create a hosted service to consume the messages. Create a class named ApacheKafkaConsumerService in another new file having the same name with a .cs extension, as found in Listing 2. This class should extend the IHostedService interface.

Register the Hosted Service
You should register the hosted service in the ConfigureServices method, as shown in the code snippet given below:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IHostedService, ApacheKafkaConsumerService>();
    services.AddControllers();
}
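One editorial note: the snippet above assumes a Startup class. If your .NET 6 project uses the minimal hosting model that the templates generate, the equivalent registration goes in Program.cs, where AddHostedService is the idiomatic shorthand. A sketch under that assumption:

var builder = WebApplication.CreateBuilder(args);

// Registers the Kafka consumer to run as a background hosted service
builder.Services.AddHostedService<ApacheKafkaConsumerService>();
builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();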

Execute the Application
Set appropriate breakpoints in the source code of both applications so that you can debug them. You should run the producer and the consumer applications separately because they are part of different solutions. To run the application, follow these steps:

1. Execute the producer application.
2. Execute the consumer application.
3. Launch the Postman HTTP Debugger tool.
4. Send an HTTP POST request to the producer API using Postman, as shown in Figure 7.

Figure 7: Send a POST request to the Producer application.

Figure 8: The breakpoint in the Producer application is hit

Listing 2: Create an ApacheKafkaConsumerService class

using Confluent.Kafka;
using Microsoft.Extensions.Hosting;
using System;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;

namespace ApacheKafkaConsumerDemo {
    public class ApacheKafkaConsumerService : IHostedService {
        private readonly string topic = "test";
        private readonly string groupId = "test_group";
        private readonly string bootstrapServers = "localhost:9092";

        public Task StartAsync(CancellationToken cancellationToken) {
            var config = new ConsumerConfig {
                GroupId = groupId,
                BootstrapServers = bootstrapServers,
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            try {
                using (var consumerBuilder = new ConsumerBuilder<Ignore, string>(config).Build()) {
                    consumerBuilder.Subscribe(topic);
                    var cancelToken = new CancellationTokenSource();

                    try {
                        while (true) {
                            var consumer = consumerBuilder.Consume(cancelToken.Token);
                            var orderRequest = JsonSerializer.Deserialize<OrderProcessingRequest>(consumer.Message.Value);
                            Debug.WriteLine($"Processing Order Id: {orderRequest.OrderId}");
                        }
                    } catch (OperationCanceledException) {
                        consumerBuilder.Close();
                    }
                }
            } catch (Exception ex) {
                System.Diagnostics.Debug.WriteLine(ex.Message);
            }

            return Task.CompletedTask;
        }

        public Task StopAsync(CancellationToken cancellationToken) {
            return Task.CompletedTask;
        }
    }
}
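One caveat worth flagging about Listing 2 (an editorial observation, not part of the original article): StartAsync runs its while (true) consume loop synchronously, so the host never finishes starting. A common alternative is to derive from BackgroundService and keep the blocking loop on a dedicated thread; a minimal sketch:

public class KafkaConsumerWorker : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // LongRunning keeps the blocking Consume() calls off the thread pool
        return Task.Factory.StartNew(() =>
        {
            var config = new ConsumerConfig
            {
                GroupId = "test_group",
                BootstrapServers = "localhost:9092",
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using var consumer =
                new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe("test");

            try
            {
                while (!stoppingToken.IsCancellationRequested)
                {
                    // Blocks until a message arrives or the token is canceled
                    var result = consumer.Consume(stoppingToken);
                    Debug.WriteLine($"Received: {result.Message.Value}");
                }
            }
            catch (OperationCanceledException)
            {
                // Expected during shutdown
            }
            finally
            {
                consumer.Close();
            }
        }, stoppingToken, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }
}

Register it the same way as any hosted service, for example with builder.Services.AddHostedService<KafkaConsumerWorker>().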

Figure 8 illustrates that the breakpoint has been hit in the producer application. Note the Timestamp value displayed in the Output window.

When you press F5, the breakpoint set in the consumer application will be hit and you can see the message displayed in the Output window, as shown in Figure 9.

Figure 9: Displaying the Order ID of the order being processed in the Output window

Kafka CLI Administration
In this section, we'll examine how to perform a few administration tasks in Kafka.

Shut Down Zookeeper and Kafka
To shut down Zookeeper, use the zookeeper-server-stop.bat script, as shown below:

bin\windows\zookeeper-server-stop.bat

To shut down Kafka, you should use the kafka-server-stop.bat script, as shown below:

bin\windows\kafka-server-stop.bat

If you need a messaging system that's high-performant, resilient, and scalable, you'll be thrilled with Apache Kafka.

Display All Kafka Messages in a Topic
To display all messages in a particular topic, use the following command:

.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

Figure 10: Displaying all messages in a topic

Where Should I Go from Here
Kafka is a natural choice if you're willing to build an application that needs high-performant, resilient, and scalable messaging. This post walked you through building a simple Kafka producer and consumer using ASP.NET 6. Note that you can set up Apache Kafka using Docker as well. You can learn more about Apache Kafka from the Apache Kafka Documentation.

Joydip Kanjilal



ONLINE QUICK ID 2201071

The Secrets of Manipulating CSV Files

It's 2022 and one of the most common file types I deal with on nearly a daily basis is the CSV file. If you told me this just a few short years ago, I would have told you: "The 80s called and they want their file format back." And yet here we are in the second decade of the 21st century and I DO deal with CSV more frequently than I would have ever expected. First, let's discuss just what CSV files are. Comma Separated Value (CSV) files are text files that contain multiple records (rows), each with one or more elements (columns) separated by a comma character. Actually, the elements can be separated by any type of delimiter, not only a comma. For instance, another common file format is the Tab Separated Value (TSV) file where every element is separated by a tab character.

Rod Paddock
[email protected]
Rod Paddock founded Dash Point Software, Inc. in 2001 to develop high-quality custom software solutions. With 30+ years of experience, Rod's current and past clients include: Six Flags, First Premier Bank, Microsoft, Calamos Investments, The US Coast Guard, and US Navy. Along with developing software, Rod is a well-known author and conference speaker. Since 1995, Rod has given talks, training sessions, and keynotes in the US, Canada, and Europe. Rod has been Editor-in-Chief of CODE Magazine since 2001.

The 80s called and they want their file format back.

So why is there this sudden demand for skills when dealing with CSV files? As The Dude would say: It's the science, man. And by that, I mean the Data Science. In our current state of development, we deal with huge quantities of data, and often this data is shared between organizations. CSV files present a unique set of opportunities for sharing large quantities of data as they're dense and contain little of the wasted content that's commonly found in JSON or XML files. They also compress rather nicely, which lowers bandwidth use. Figure 1 shows an example of a simple CSV file containing movie data.

Figure 1: Sample movie data in CSV format

By the end of this article, you'll be intimately familiar with this data. You'll learn how to read, write, and format CSV files based on this data.

Movie Data Sample
As stated above, this article will be all about reading and writing movie data formatted in various CSV formats. The following class code represents the data:

public class Movie
{
    public string Name { get; set; } = "";
    public string Director { get; set; } = "";
    public DateTime DateReleased { get; set; }
    public decimal BoxOfficeGross { get; set; } = 0.0m;
}

public static List<Movie> GetMovies()
{
    var movies = new List<Movie>();

    movies.Add(new Movie() { Name = "American Graffiti",
        Director = "George Lucas",
        DateReleased = new DateTime(1977, 5, 23),
        BoxOfficeGross = 123456 });

    movies.Add(new Movie() { Name = "Star Wars",
        Director = "George Lucas",
        DateReleased = new DateTime(1977, 5, 23),
        BoxOfficeGross = 123456 });

    movies.Add(new Movie() { Name = "Empire Strikes Back",
        Director = "Irvin Kershner",
        DateReleased = new DateTime(1977, 5, 23),
        BoxOfficeGross = 123456 });

    movies.Add(new Movie() { Name = "Return of the Jedi",
        Director = "Richard Marquand",
        DateReleased = new DateTime(1977, 5, 23),
        BoxOfficeGross = 123456 });

    return movies;
}

Introducing CSVHelper
A few years ago, my team began building a Data Analytics platform for our Data Scientists to use. The data was hosted in a platform called Snowflake that uses CSV files as a mechanism for loading data into their cloud services. When this need arose, I did what all good developers do: I searched for a tool that would help me deal with CSV files.

This is where I came across a .NET library called CSVHelper. This open-source tool, written by developer Josh Close (and many others), is simple to use yet powerful enough to deal with the many types of CSV scenarios that have presented themselves over the years.

Getting up and running with CSVHelper is simple. The following steps demonstrate how to bootstrap a .NET application capable of manipulating CSV files.

Bootstrapping CSVHelper
There are only two steps to bootstrapping CSVHelper:

1. Create a new Console Application
2. Install CSVHelper via the NuGet Package Manager Console using the following command:

Install-Package CsvHelper
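If you prefer the dotnet CLI to the Package Manager Console, the standard equivalent (general NuGet tooling, not specific to this article) is:

dotnet add package CsvHelper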
Now you're ready to begin manipulating CSV files.



Writing CSV Files
Once you've created your basic project, you can start by outputting a collection of data to a CSV file. The following code demonstrates the simplest mechanism for writing a collection of movie records to a CSV file.

// Requires: using CsvHelper; using CsvHelper.Configuration;
// using System.Globalization;
public static void WriteCsvFile(
    List<Movie> dataToWrite, string outputFile)
{
    var config =
        new CsvConfiguration(
            CultureInfo.InvariantCulture);

    using (var writer =
        new StreamWriter(outputFile))
    using (var csv =
        new CsvWriter(writer, config))
    {
        csv.WriteRecords(dataToWrite);
    }
}

When you examine this code, take notice of the following items:

• The code creates a CsvConfiguration object. This object will be used to control the output of your CSV file.
• The code opens a StreamWriter that controls where your file will be written.
• The code then creates a CsvWriter object, passing in the configuration object. This writer sends your data to the stream opened by the writer using the passed-in configuration settings.
• Finally, the call to the WriteRecords routine takes an IEnumerable collection and writes it to the CSV file.

The output of this set of code can be found in Figure 2.

Figure 2: Movie Data Output as CSV file.
new StreamWriter(outputFile))
using (var csv =
new CsvWriter(writer, config)) Configuring Writer Options
{ As stated earlier, the CSVWriter accepts a configuration ob-
csv.WriteRecords(dataToWrite); ject that’s used to control output options. A few of the key
} options will be covered next.
}
Header Column
You may or may not want to include a header file in your
CSV files. By default, CSVHelper adds the name of your class’
properties in a header row. You can turn off the header by
setting it with the following code:

config.HasHeaderRecord = false;

Figure 3 shows the results of this option.

Changing Delimiters
Figure 2: Movie Data Output as CSV file. One of the more common options is the delimiter used between
each data element. By default, CSVHelper delimits data comma
characters. The following three examples show how you can
change the delimiter to the PIPE, TAB, and a “crazy” delimiter.

• Changing the delimiter to PIPE:

config.Delimiter = "|";

Figure 4 shows the PIPE delimiter in action.

Figure 3: A CSV file with no header • Changing the delimiter to TAB:

config.Delimiter = "\t";

Figure 5 shows the TAB delimiter in action.

• Creating a “Crazy” delimiter (This is just to demon-


strate that your delimiter can be anything you desire):

config.Delimiter = "[[YES_IM_A_DELIMETER]]";

Figure 6 shows the “Crazy” delimiter doing its thing.


Figure 4: The CSV file with PIPE delimiter
Quote Delimiting
I’ve found in many situations that my data needs to have
each data element wrapped in quotation marks. This is
especially true when your data contains delimiters within
their fields, e.g., commas. CSVHelper allows you to quote-
delimit your data using the following options.

config.ShouldQuote = args => true;

Figure 5: The CSV file with TAB delimiter Figure 7 shows the CSV with quoted content.

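ShouldQuote receives an args object describing the field about to be written, so quoting can also be applied selectively. As a sketch (this assumes a CsvHelper version whose ShouldQuote args expose the field text through a Field property), you could quote only fields that actually contain the delimiter:

config.ShouldQuote = args =>
    args.Field != null &&
    args.Field.Contains(config.Delimiter);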


Formatting Output with Map Classes
Another very handy tool is the ability to control the output sent to your file. By default, CSVHelper outputs elements by reflecting on the class they come from and creating columns for each property. There are many situations where you may want to export a limited set of properties or you wish to change the order of the output fields.

This is where mapping classes come in. When exporting data, CSVHelper can accept a mapping object derived from the ClassMap class. The following code demonstrates a ClassMap that limits the data exported to two properties.

public class MovieOutputClassMap
    : ClassMap<Movie>
{
    public MovieOutputClassMap()
    {
        Map(m => m.Name);
        Map(m => m.DateReleased);
    }
}

Once you've built your class map, you need to apply it to your writer. This is done using two commands. The first one creates an instance of your class map.

var classMap = new MovieOutputClassMap();

The second registers it with the writer's Context property:

csv.Context.RegisterClassMap(classMap);

The full writer code is shown below:

public static void WriteCsvFile(
    List<Movie> dataToWrite,
    string outputFile)
{
    var config =
        new CsvConfiguration(CultureInfo.InvariantCulture);

    //turn off the header record
    config.HasHeaderRecord = false;

    //change the delimiter
    config.Delimiter = "|";

    //quote delimit
    config.ShouldQuote = args => true;

    //change the order of fields
    var classMap = new MovieOutputClassMap();

    using (var writer =
        new StreamWriter(outputFile))
    using (var csv =
        new CsvWriter(writer, config))
    {
        csv.Context.RegisterClassMap(classMap);
        csv.WriteRecords(dataToWrite);
    }
}

Figure 8 shows the CSV file with two columns.

Figure 8: CSV file with only two columns exported

You can also use a class map to reorder your output:

public class MovieOutputClassMap
    : ClassMap<Movie>
{
    public MovieOutputClassMap()
    {
        Map(m => m.Name);
        Map(m => m.DateReleased);
        Map(m => m.Director);
        Map(m => m.BoxOfficeGross);
    }
}

Figure 9 shows the CSV file with its columns reordered.

Figure 9: The CSV file columns reordered

Along with altering the number of columns exported and changing their ordinal position, you can also control the text that's emitted into the CSV stream. Altering the output (and input) is done using a class that implements the ITypeConverter interface.

The code below demonstrates creating a type converter that alters the output of the DateReleased property by removing the time component. This code receives the property's value and returns a string using the ConvertToString aspect of the type converter. There's also a corollary for reading these values from strings via an implementation of the ConvertFromString function.


public class DateOutputConverter : ITypeConverter
{
    public object ConvertFromString(string text,
        IReaderRow row, MemberMapData memberMapData)
    {
        throw new NotImplementedException();
    }

    public string ConvertToString(
        object value,
        IWriterRow row,
        MemberMapData memberMapData)
    {
        var retval =
            ((DateTime)value).ToString("d");
        return retval;
    }
}
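The converter above only implements ConvertToString because it's used for writing. If you also wanted to read dates stored in this short format, a minimal sketch of ConvertFromString (assuming the same invariant culture used elsewhere in this article) could parse the text back into a DateTime:

public object ConvertFromString(string text,
    IReaderRow row, MemberMapData memberMapData)
{
    // Parse the short-date ("d") text written by
    // ConvertToString back into a DateTime.
    return DateTime.Parse(text,
        CultureInfo.InvariantCulture);
}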
Once you've created your converter, you attach it to your column via the mapping class. The following code shows how to attach a converter to a property map.

public class MovieOutputClassMap
    : ClassMap<Movie>
{
    public MovieOutputClassMap()
    {
        Map(m => m.Name);
        Map(m => m.DateReleased).TypeConverter(
            new DateOutputConverter());
        Map(m => m.Director);
        Map(m => m.BoxOfficeGross);
    }
}

Figure 10 shows the CSV file with the date formatting altered.

Figure 10: The CSV file with the date formatting altered

Reading CSV Files
Now that you have a basic understanding of writing CSV files, you can turn your sights to reading CSV files. There are two primary mechanisms for reading a file. The first is to open the file and iterate through it one record at a time.

When you examine this set of code for reading files, take notice of the following items:

• The code creates a CsvConfiguration object. This object is used to control how the reader manipulates your CSV data as it's read.
• The file opens a StreamReader, which controls where your file will be read from.
• The code then creates a CsvReader object, passing in the configuration object. This reader is used to iterate through your CSV file one record at a time.
• The code iterates the file using the Read() function, which moves down the file one record at a time. Note that the code does a Read() immediately, to skip the record header.
• Finally, the code uses various Getter functions to read data from each column.

public static List<Movie>
    ManualReadCsvFile(string inputFile)
{
    var retval = new List<Movie>();
    var config = new CsvConfiguration(
        CultureInfo.InvariantCulture);

    using (var reader =
        new StreamReader(inputFile))
    using (var csv =
        new CsvReader(reader, config))
    {
        //skip the header
        csv.Read();
        while (csv.Read())
        {
            var movie = new Movie();
            movie.Name = csv.GetField(0);
            movie.Director = csv.GetField(1);
            movie.DateReleased =
                csv.GetField<DateTime>(2);
            movie.BoxOfficeGross =
                csv.GetField<decimal>(3);
            retval.Add(movie);
        }
    }
    return retval;
}

Another and much simpler way to read a file is to use CSVHelper's built-in mechanism for iterating through a file, automatically transforming CSV records into .NET classes.



When you examine this set of code for reading files, take notice of the following items:

• The code creates a CsvConfiguration object. This object is used to control how the reader manipulates your CSV data as it's read.
• The file opens a StreamReader, which controls where your file will be read from.
• The code then creates a CsvReader object, passing in the configuration object. This reader is used to iterate through your CSV file one record at a time.
• The code then reads all the records using the GetRecords<T> method. This function returns an IEnumerable collection.
• The collection is then added to the function's return value via the AddRange() method.

public static List<Movie> ReadCsvFile(
    string inputFile)
{
    var retval = new List<Movie>();
    var config =
        new CsvConfiguration(CultureInfo.InvariantCulture);

    using (var reader = new StreamReader(inputFile))
    using (var csv =
        new CsvReader(reader, config))
    {
        retval.AddRange(csv.GetRecords<Movie>());
    }
    return retval;
}

As you can see, this style of code is much simpler to deal with.

Simple code is always better, both while you're writing it and when you come back to it later.

You can also use class maps to change the order in which CSV elements are read from your CSV file and applied to the returned object's properties. The following class map reads content from the CSV created earlier in this article. Notice the column order.

public class MovieInputClassMap : ClassMap<Movie>
{
    public MovieInputClassMap()
    {
        Map(m => m.Name);
        Map(m => m.DateReleased);
        Map(m => m.Director);
        Map(m => m.BoxOfficeGross);
    }
}

The code used to attach a class map is exactly like the writer's. You simply create an instance of the class map and apply it to the CsvReader's Context property:

public static List<Movie> ReadCsvFile(
    string inputFile)
{
    var retval = new List<Movie>();
    var config =
        new CsvConfiguration(CultureInfo.InvariantCulture);

    var classMap = new MovieInputClassMap();

    using (var reader =
        new StreamReader(inputFile))
    using (var csv =
        new CsvReader(reader, config))
    {
        csv.Context.RegisterClassMap(classMap);
        retval.AddRange(csv.GetRecords<Movie>());
    }

    return retval;
}

Conclusion
As you can see, using CSVHelper greatly simplifies the process of reading and writing CSV files. This library has a good balance of simple-to-use yet very capable tools. I highly recommend exploring more of this library's capabilities.

So now you may be asking yourself how you can exploit these tools and techniques in the Data Science space. Well, that's where the next article comes in. In a future article, I'll demonstrate streaming this data into Snowflake via an S3 bucket and how to bulk-load data into Postgres using the same tools. Thanks for exploring the non-glamorous world of CSV files with me.

Rod Paddock



ONLINE QUICK ID 2201081

Minimal APIs in .NET 6

Building REST APIs has become central to many development projects. The choice for building those projects is wide, but if you're a C# developer, the options are more limited. Controller-based APIs have been the most common for a long time, but .NET 6 changes that with a new option. Let's talk about it.

Shawn Wildermuth
[email protected]
wildermuth.com
twitter.com/shawnwildermut

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, is an international conference speaker, and is one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

How'd We Get Here?
Connecting computers has been a problem since the first steps of distributed computing some fifty years ago (see Figure 1). Yeah, that makes me feel old too. Remote Procedure Calls were as important then as APIs are in modern development. With REST, OData, GraphQL, GRPC, and the like, we have a lot of options to create ways of communicating between apps.

Figure 1: History of APIs

Although lots of these technologies are thriving, using REST as a way of communicating is still a stalwart in today's development world. Microsoft has had a number of solutions for creating REST APIs over the years, but for the past decade or so, Web API has been the primary tool. Based on the ASP.NET MVC framework, Web API was meant to treat REST verbs and nouns as first-class citizens. Being able to create a class that represents a surface area (often tied to a "noun" in the REST sense) that's tied together with a routing library is still a viable way to build APIs in today's world.

One of the drawbacks to the Web API framework (e.g., Controllers) is that there's a bit of ceremony involved for small APIs (or microservices). For example, Controllers are classes that represent one or more possible calls to an API:

[Route("api/[Controller]")]
[Authorize(AuthenticationSchemes =
  JwtBearerDefaults.AuthenticationScheme)]
public class OrdersController : Controller
{
  readonly IDutchRepository _repository;
  readonly ILogger<OrdersController> _logger;
  readonly IMapper _mapper;
  readonly UserManager<StoreUser> _userManager;

  public OrdersController(
    IDutchRepository repository,
    ILogger<OrdersController> logger,
    IMapper mapper,
    UserManager<StoreUser> userManager)
  {
    _repository = repository;
    _logger = logger;
    _mapper = mapper;
    _userManager = userManager;
  }

  [HttpGet]
  public IActionResult Get(
    bool includeItems = true)
  {
    try
    {
      var username = User.Identity.Name;

      var results = _repository
        .GetOrdersByUser(username,
                         includeItems);

      return Ok(_mapper
        .Map<IEnumerable<OrderViewModel>>(
          results));
    }
    catch (Exception ex)
    {
      _logger.LogError($"Failed : {ex}");
      return BadRequest($"Failed");
    }
  }
...

This code is typical of Web API Controllers. But does that mean it's bad? No. For larger APIs and ones that have advanced needs (e.g., rich authentication, authorization, and versioning), this structure works great. But for some projects, a simpler way to build APIs is really needed. Some of this pressure is coming from other frameworks where building APIs feels smaller, but it's also driven by wanting to be able to design/prototype APIs more quickly.

This need isn't all that new. In fact, the Nancy framework (https://github.com/NancyFx/Nancy) was a C# solution for mapping API calls way back then (although it's deprecated now). Even newer libraries like Carter (https://github.com/CarterCommunity/Carter) are trying to accomplish the same thing. Having efficient and simple ways to create APIs is a necessary technique. You shouldn't take Minimal APIs as the "right" or "wrong" way to build APIs. Instead, you should see them as another tool to build your APIs.

Enough talk, let's dig into how it works.

What Are Minimal APIs?
The core idea behind Minimal APIs is to remove some of the ceremony of creating simple APIs. It means defining lambda expressions for individual API calls. For example, this is as simple as it gets:

app.MapGet("/", () => "Hello World!");

This call specifies a route (e.g., "/") and a callback to execute once a request matching the route and verb arrives. The method MapGet is specifically there to map an HTTP GET to the callback function. Much of the magic is in the type inference that's happening. When we return a string (like in this example), it wraps that in a 200 (e.g., OK) return result.

How do you even call this? Effectively, these mapping methods are exposed as extension methods on the IEndpointRouteBuilder interface. This interface is exposed by the WebApplication class that's used to create a new Web server application in .NET 6. But I can't really dig into this without first talking about how the new Startup experience in .NET 6 works.



The New Startup Experience
A lot has been written about the desire to take the boilerplate out of the startup experience in C# in general. To this end, Microsoft has added something called "Top-Level Statements" to C# 10. This means that the program.cs that you rely on to start your Web applications doesn't need a void Main() to bootstrap the app. It's all implied. Before C# 10, a startup looked something like this:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace Juris.Api
{
  public class Program
  {
    public static void Main(string[] args)
    {
      CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder
      CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
          .ConfigureWebHostDefaults(webBuilder =>
          {
            webBuilder.UseStartup<Startup>();
          });
  }
}

The need for a class and a void Main method that bootstraps the host to start the server is how we've been writing ASP.NET in the .NET Core way for a few years now. With top-level statements, they want to streamline this boilerplate, as seen below:

var builder = WebApplication.CreateBuilder(args);

// Setup Services

var app = builder.Build();

// Add Middleware

// Start the Server
app.Run();

Instead of a Startup class with places to set up services and middleware, it's all done in this very simple top-level program. What does this have to do with Minimal APIs? The app that the builder object builds supports the IEndpointRouteBuilder interface. So, in our case, the setup of the APIs is just the middleware:

var builder = WebApplication.CreateBuilder(args);

// Setup Services

var app = builder.Build();

// Map APIs
app.MapGet("/", () => "Hello World!");

// Start the Server
app.Run();

Let's talk about the individual features here.

Routing
The first thing you might notice is that the pattern for mapping API calls looks a lot like MVC Controllers' pattern matching. This means that Minimal APIs look a lot like controller methods. For example:

app.MapGet("/api/clients", () => new Client()
{
  Id = 1,
  Name = "Client 1"
});

app.MapGet("/api/clients/{id:int}",
  (int id) => new Client()
{
  Id = id,
  Name = "Client " + id
});

Simple paths like "/api/clients" point at simple URI paths, whereas using the parameter syntax (even with constraints) continues to work. Notice that the callback can accept the ID that's mapped from the URI just like MVC Controllers. One thing to notice in the lambda expression is that the parameter types are inferred (like most of C#). This means that because you're using a URL parameter (e.g., id), you need to type the first parameter. If you didn't type it, it would try to guess the type in the lambda expression:

app.MapGet("/api/clients/{id:int}",
  (id) => new Client()
{
  Id = id, // Doesn't Work
  Name = "Client " + id
});

This doesn't work because without the hint of type, the first parameter of the lambda expression is assumed to be an instance of HttpContext. That's because, at its lowest level, you can manage your own response to any request with the context object. But for most of you, you'll use the parameters of the lambda expression to get help in mapping objects and parameters.
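If you do want to work at that lowest level, you can take the HttpContext explicitly and write the response yourself. A minimal sketch (the endpoint path here is invented for illustration):

app.MapGet("/api/ping", async (HttpContext ctx) =>
{
  // No inference involved: write the raw response.
  await ctx.Response.WriteAsync("pong");
});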
Using Services
So far, the API calls you've seen aren't anything like real-world code. In most of those cases, you want to be able to use common services to execute calls. This brings me to how to use services in Minimal APIs. You may have noticed earlier that I'd left a space to register services before I built the WebApplication:



var bldr = WebApplication.CreateBuilder(args);

// Register Services Here

var app = bldr.Build();

You can just use the builder object to access the services, like so:

var bldr = WebApplication.CreateBuilder(args);

// Register Services
bldr.Services.AddDbContext<JurisContext>();
bldr.Services.AddTransient<IJurisRepository,
                           JurisRepository>();

var app = bldr.Build();

Here you can see that you can use the Services object on the application builder to add any services you need (in this case, I'm adding an Entity Framework Core context object and a repository that I'll use to execute queries). To use these services, you can simply add them to the lambda expression parameters:

app.MapGet("/clients",
  async (IJurisRepository repo) => {
    return await repo.GetClientsAsync();
  });

By adding the required type, it will be injected into the lambda expression when it executes. This is unlike Controller-based APIs, in that dependencies are usually defined at the class level. These injected services don't change how services are handled by the service layer (i.e., Minimal APIs still create a scope for scoped services). When you're using URI parameters, you can just add the services required to the other parameters. For example:

app.MapGet("/clients/{id:int}",
  async (int id, IJurisRepository repo) => {
    return await repo.GetClientAsync(id);
  });

This requires that you think about the services you require for each API call separately. But it also provides the flexibility to use services at the API level.

Verbs
So far, all I've looked at are HTTP GET APIs. There are methods for the different types of verbs. These include:

• MapPost
• MapPut
• MapDelete

These methods work identically to the MapGet method. For example, take this call to POST a new client:

app.MapPost("/clients",
  async (Client model,
         IJurisRepository repo) =>
  {
    // ...
  });

Notice that the model in this case doesn't need to use attributes to specify FromBody. It infers the type if the shape matches the type requested. You can mix and match all of what you might need (as seen in MapPut):

app.MapPut("/clients/{id}",
  async (int id,
         ClientModel model,
         IJurisRepository repo) =>
  {
    // ...
  });

For other verbs, you need to handle the mapping using MapMethods:

app.MapMethods("/clients", new [] { "PATCH" },
  async (IJurisRepository repo) => {
    return await repo.GetClientsAsync();
  });

Notice that the MapMethods method takes a path, but also takes a list of verbs to accept. In this case, I'm executing this lambda expression when a PATCH verb is received. Although you're creating APIs separately, most of the same code that you're familiar with will continue to work. The only real change is how the plumbing finds your code.
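MapDelete follows the same pattern as the other verbs. A sketch (DeleteClientAsync is a hypothetical repository method, and Results is the status-code helper covered in the next section):

app.MapDelete("/clients/{id:int}",
  async (int id, IJurisRepository repo) => {
    // Hypothetical repository call
    await repo.DeleteClientAsync(id);
    return Results.NoContent();
  });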
services). When you’re using URI parameters, you can just add   async (IJurisRepository repo) => {
the services required to the other parameters. For example:     return Results.Ok(
await repo.GetClientsAsync());
app.MapGet("/clients/{id:int}",   });
  async (int id, IJurisRepository repo) => {
    return await repo.GetClientAsync(id); Results supports most status codes you’ll need, like:
  });
• Results.Ok: 200
This requires you think about the services you require for • Results.Created: 201
each API call separately. But it also provides the flexibility • Results.BadRequest: 400
to use services at the API level. • Results.Unauthorized: 401
• Results.Forbid: 403
Verbs • Results.NotFound: 404
So far, all I’ve looked at are HTTP GET APIs. There are meth- • Etc.
ods for the different types of verbs. These include:
In a typical scenario, you might use several of these:
• MapPost
• MapPut
app.MapGet("/clients/{id:int}",
• MapDelete
  async (int id, IJurisRepository repo) => {

These methods work identically to the MapGet method. For


    try {
example, take this call to POST a new client:       var client = await repo.GetClientAsync(id);
      if (client == null)
app.MapPost("/clients",
{
  async (Client model,
         IJurisRepository repo) => return Results.NotFound();
  { }
// ...       return Results.Ok(client);
  });     }
    catch (Exception ex)
Notice that the model in this case doesn’t need to use at-     {
tributes to specify FromBody. It infers the type if the shape     return Results.BadRequest("Failed");
matches the type requested. You can mix and match all of     }
what you might need (as seen in MapPut):   });



If you’re going to pass in a delegate to the MapXXX classes, you
can simply have them return an IResult to require a status code: await repo.GetClientsAsync());
  }).AllowAnonymous();
app.MapGet("/clients/{id:int}", HandleGet);
In this way, you can mix and match authentication and au-
async Task<IResult> HandleGet(int id, thorization as you like.
IJurisRepository repo)
{ Using Minimal APIs without Top-Level Functions
  try This new change to .NET 6 can come to a shock to many
  { of you. You might not want to change your Program.cs to
    var client = await repo.GetClientAsync(id); use Top-Level Functions for all your projects, but can you
    if (client == null) return Results.NotFound(); still use Minimal APIs without having to move to Top-Level
    return Results.Ok(client); Functions. If you remember, earlier in the article I men-
  } tioned that most of the magic of Minimal APIs comes from
  catch (Exception) the IEndpointRouteBuilder interface. Not only does the
  { WebApplication class support it, but it’s also used in the
    return Results.BadRequest("Failed"); traditional Startup class you may already be using. When
  } you call UseEndpoints, the delegate you specify there pass-
} es in an IEndpointRouteBuilder, which means you can just
call MapGet:
Notice that because you’re async in this example, you need
to wrap the IResult with a Task object. The resulting re- public void Configure(IApplicationBuilder app, SPONSORED SIDEBAR:
turn is an instance of IResult. Although Minimal APIs are IWebHostEnvironment env)
meant to be small and simple, you’ll quickly see that, prag- { Get .NET 6 Help for Free
matically, APIs are less about how they’re instantiated and if (env.IsDevelopment())
more about the logic inside of them. Both Minimal APIs and { How does a FREE hour-
Controller-based APIs work essentially the same way. The app.UseDeveloperExceptionPage(); long CODE Consulting
plumbing is all that changes. } virtual meeting with our
expert consultants sound?
Securing Minimal APIs app.UseRouting(); Yes, FREE. No strings. No
Although Minimal APIs work with Authentication and Autho- commitment. No credit
rization middleware, you may still need a way to specifying, cards. Nothing to buy.
app.UseAuthorization();
on an API-level, how security should work. If you’re coming For more information,
from Controller-based APIs, you might use the Authorize visit www.codemag.com/
app.UseEndpoints(endpoints =>
attribute to specify how to secure your APIs, but without consulting or email us at
{
controllers, you’re left to specify them at the API level. You [email protected].
endpoints.MapGet("/clients",
do this by calling methods on the generated API calls. For async (IJurisRepository repo) =>
example, to require authorization: {
return Results.Ok(
app.MapPost("/clients", await repo.GetClientsAsync());
  async (ClientModel model, }).AllowAnonymous();
         IJurisRepository repo) => });
  { }
// ...
  }).RequireAuthorization(); Although I think that Minimal APIs are most useful for green-
field projects or prototyping projects, you can use them in
This call to RequireAuthorization is tantamount to using your existing projects (assuming you’ve upgraded to .NET 6).
the Authorize filter in Controllers (e.g., you can specify
which Authentication scheme or other properties you need).
Let’s say you’re going to require authentication for all calls:
Where Are We?
Hopefully, you’ve seen here that Minimal APIs are a new way
to build your APIs without much of the plumbing and cer-
bldr.Services.AddAuthorization(cfg => { emony that are involved with Controller-based APIs. At the
  cfg.FallbackPolicy = same time, I hope you’ve seen that as complexity increases,
    new AuthorizationPolicyBuilder() Controller-based APIs have benefits as well. I see Minimal
      .RequireAuthenticatedUser() APIs as a starting point for creating APIs and, as a project
      .Build(); matures, I might move to Controller-based APIs. Although
}); it’s very new, I think Minimal APIs are a great way of creat-
ing your APIs. The patterns and best practices about how
You’d then not need to add RequireAuthentication on to use them will only get answered in time. I hope you can
every API, but you could override this default by allowing contribute to that conversation!
anonymous for other calls:
If you’d like a copy of the code for this article, please visit:
app.MapGet("/clients", https://github.com/shawnwildermuth/codemag-minimalapis.
  async (IJurisRepository repo) =>
  {  Shawn Wildermuth
    return Results.Ok( 



ONLINE QUICK ID 2201091

Simplest Thing Possible: Tasks

Despite all of the time that Tasks and the Task Parallel Library have been in .NET, their capabilities are still underutilized. For the record, Tasks were introduced in .NET 4, released in February 2010, nearly 12 years ago! That's a few lifetimes in the I/T world. For long-time CODE Magazine readers, you may recall my Simplest Thing Possible (STP) series that ran periodically from 2012 to 2016.

John V. Petersen
[email protected]
linkedin.com/in/johnvpetersen

Based near Philadelphia, Pennsylvania, John is an attorney, information technology developer, consultant, and author.

Past hits include three issues on Dynamic Lambdas, Promises in JavaScript, NuGet, and SignalR. I still get pinged on the Dynamic Lambda work—an oldie, but a very much relevant goodie!

I was a bit dumbfounded to realize that I never did an STP on Tasks! Better late than never! The STP series had and has one goal: to get you up and running on a .NET feature as quickly as possible so you can take it to the next level for your context and use-cases. As .NET evolves and its history grows, as well as the numerous blog posts with opinions—some good, some not, some dogmatic, some neutral—more than ever, it has become important to cut through the noise. In most cases, the official Microsoft documentation is the best source of raw, unopinionated information.

Let's get to it!

What Is a Task?
Think of a Task as a unit of work that is to be executed sometime in the future. I've added the emphasis on future because that's what a Task is: a future. An oft-asked question in the various online forums (Stack Overflow, etc.) is whether a Task is a promise in the same way promises are implemented in JavaScript. Promises are supported in .NET through the TaskCompletionSource class, which is beyond the scope of this article.

In the meantime, consider a Task as a future unit of work that may exist on its own or in the context of TaskCompletionSource. When in the context of TaskCompletionSource, a Task participates in fulfilling a promise. A promise doesn't guarantee that the operation is successful. What's guaranteed is that a result of some kind will be returned. In most cases, it's sufficient to implement tasks as independent entities, apart from TaskCompletionSource. A good use case for TaskCompletionSource is in the API context where some kind of result, even if it's an error, must be returned before yielding control back to the caller. In that regard, TaskCompletionSource is a promise in the same way promises are implemented in JavaScript.

Tasks have been a .NET feature since version 4.0, which was released in 2010. Despite being an available feature for over a decade, it remains a somewhat under-utilized feature, giving credence to the notion that sometimes, what's old is new! The same is true with the async/await language feature introduced in C# 5 in 2012.

A Related Concept: Async/Await
Tasks are asynchronous (async). It's impossible to discuss Tasks without discussing the async/await language feature. Implemented in both C# 5 and VB.NET 11 in 2012, async and await have been around for a long time! If you're going to implement tasks, you must necessarily confront asynchronous programming, which is a very different concept than the synchronous programming of the past. Microsoft made this task much easier when it introduced some "syntactic sugar" to make things easier. In other words, the .NET compiler undertakes the heavy lifting to transform your C# or VB.NET into the necessary IL (Intermediate Language) code to support async calls.

Sometimes, what's old is new!

An async call, as the name implies, means that the code making the call does not wait for the result. And yet, the keyword associated with async is await, which can be confusing because it implies that the code waits, which contradicts async programming! Instead, what it means is that the calling code continues its work while it waits for (or awaits) the async code to complete.

Before async/await, you needed to implement callbacks, which meant that you had to pass a reference to the function that was to receive the async function's result. Async calls often took a bit to understand if the only world you were familiar with was synchronous calls where the code would wait, and wait, and continue to wait until either the call completed or timed out. This often made for a very unpleasant user experience because the client, typically a user interface, wouldn't update. The UI and the whole app appeared to be frozen, and then after some time, everything magically came back to life!

Async/await allows for calls to be non-blocking, meaning that the current thread isn't blocked while the task is running in another thread. Hence, as stated previously, the current thread continues its work while it waits. The async/await syntax alleviates you of the direct burden of establishing callbacks. Async programming presents you with an opportunity for better performing applications because it results in more efficient use of resources. Underneath the covers, .NET manages the spawning of a new thread in which to carry out the task, making sure that the result gets back to your calling code.
new! The same is true with the async/await language fea- to the .NET Framework version that you and your team are
ture introduced in C# 5 in 2012. using!)

A Related Concept: Async/Await • Task Class: https://docs.microsoft.com/en-us/dot-


Tasks are asynchronous (async). It’s impossible to discuss net/api/system.threading.tasks.task-1?view=net-5.0
Tasks without discussing the async/await language feature. • TaskCompletionSource Class: https://docs.microsoft.
Implemented in both C# 5 and VB.NET 11 in 2012, async and com/en-us/dotnet/api/system.threading.tasks.task-
await have been around for a long time! If you’re going to completionsource-1?view=net-5.0
implement tasks, you must necessarily confront asynchro- • Async keyword (C#): https://docs.microsoft.com/en-
nous programming, which is a very different concept than us/dotnet/csharp/language-reference/keywords/async



• Await operator (C#): https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/await
• Async keyword (VB.NET): https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/modifiers/async
• Await operator (VB.NET): https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/operators/await-operator
• The async programming model: https://docs.microsoft.com/en-us/dotnet/csharp/async
• Task-based async programming model: https://docs.microsoft.com/en-us/dotnet/standard/parallel-programming/task-based-asynchronous-programming

A Simple Async Task and Test Example
The following is a very simple example of implementing a Task in a custom method:

public async Task<HttpResponseMessage>
    WebCall(string url =
        "https://www.google.com")
{
    using var client = new HttpClient();
    var result = await client.GetAsync(url);
    return result;
}

The WebCall method is a very simple wrapper around the HttpClient class, which contains async methods to perform delete, get, patch, post, and put operations. HttpClient is the preferred class to interact with APIs. If you aren't familiar with the HttpClient class, the reference documents are a good source of information and can be found here: https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient?view=net-5.0.

The method signature must be marked as async when the method body contains an await statement.

Whenever a method body contains an await statement, the method signature must be marked as async. The other element to note is the return type specified in the signature: Task<HttpResponseMessage>. The return statement itself is of type HttpResponseMessage. Ultimately, the HttpResponseMessage will be returned in the future via a Task. There are four simple elements to implement async programming in this first example:

• Specify in the method signature that the return type is wrapped in a task: Task<HttpResponseMessage>.
• Mark the method signature as async.
• Have at least one awaitable method call in the method body.
• Return the type specified in the task generic declaration: HttpResponseMessage.

That's all there is to it. Until use cases become more complex, you don't need to be an expert in all the details of async programming to get up and running for most use cases.

The following is a simple XUnit test that calls the WebCall method:

[Fact]
public async void WebCallASyncTest()
{
    var result = await WebCall();
    var content =
        await result.Content
            .ReadAsStringAsync();
    Assert.True(!string
        .IsNullOrEmpty(content));
}

Because the WebCall method is marked as async, it is awaitable. If you wish to test the async operation, the unit test itself must be async because of the requirement to use the await operator. Note that the void keyword instead of Task is used. The Task class comes in two flavors: generic and non-generic. For clarity purposes, if you wanted to replace the void keyword with Task, you can do that. Either way, the unit test executes the code under test asynchronously.

What if you wanted to run an awaitable method in such a way that it blocked the current thread? In other words, can you run an async method synchronously? You can do it by directly accessing the task's Result property, which itself is an anti-pattern! That is not to suggest that employing an anti-pattern is the incorrect thing to do. Employing anti-patterns is often necessary based on context.

The following code is the previous test reworked to run async code synchronously:

[Fact]
public void WebCallSyncTestResult()
{
    var result = WebCall().Result;
    var content =
        result.Content
            .ReadAsStringAsync().Result;
    Assert.True(!string
        .IsNullOrEmpty(content));
}

Wrapping Existing Non-Async Code in a Task
Tasks can be very useful in offloading existing workloads to another thread. Perhaps you have an application that requires a call to a process and you'd like the user to still be able to interact with the UI while it runs. If that code runs synchronously, the UI thread will be blocked until it gets a response back from this long running process. The following example employs the static Task.Run method to wrap a call to a legacy method:

[Fact]
public async void TestLegacyCallWithTask() {
    var result =
        await Task.Run(
            () => LegacyProcess(
                new string[1] {"1234"})
        );
    Assert.True(result >= 1);
}



The previous example assumes a short running, non-CPU-intensive process. The Run method is a short-hand method for defining and launching a Task in one operation. The Run method causes the task to run on a thread allocated from the default thread pool.

What if it's a long running process? In that case, you'd want to use Task.Factory.StartNew(), which provides more granular control over how the Task gets created by providing access to more parameters. Figure 1 illustrates the third StartNew method signature with the additional parameters controlling how the Task is created.

Figure 1: The StartNew() method accepts additional parameters to control how the task is created.

Task.Run is a simpler, shorthand way to create and start a task in one operation.

Task.Run, on the other hand, is a simpler, shorthand way to create and start a task in one operation.

If your legacy process is long-running or CPU-intensive, you'll want to use the approach illustrated in the following example:

var source = new CancellationTokenSource();

var result =
    await Task.Factory.StartNew<int>(
        () => LegacyProcess(
            new string[1] {"1234"}), source.Token);

Current docs on the Run and StartNew methods may be found here:

• Run(): https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.run?view=net-5.0
• StartNew(): https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskfactory.startnew?view=net-5.0

For more information on Task creation options, and there are many, that guidance may be found here: https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskcreationoptions?view=net-5.0.
Cancellation Tokens
In the previous example, take note of the cancellation token passed as a parameter. This is one of async programming's major benefits: the ability to cancel a long running Task. If the user elects to cancel the task while it's running, that can be accomplished because the current thread is not blocked, allowing user interaction. A great tutorial on cancelling long-running tasks may be found in Brian Lagunas's YouTube video: https://youtu.be/TKc5A3exKBQ.

Conclusion
Tasks and async programming are powerful tools to add to your development toolbox and this article has only scratched the surface. More complex use cases include wrapping multiple tasks together and waiting for all to complete, each running in their own thread. .NET's Task and async capabilities are a rich and interesting environment! If you want to see more of this type of content, drop a line to me at CODE Magazine and I'll keep pumping them out. With .NET 5 and VS Code, there's much to distill and demystify.
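As a taste of that multi-task scenario, here's a minimal sketch using the WebCall method from earlier (the second URL is just an example). Task.WhenAll takes the already-running tasks and completes when all of them have completed:

var tasks = new[]
{
    WebCall(),
    WebCall("https://www.bing.com")
};

HttpResponseMessage[] results =
    await Task.WhenAll(tasks);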

John V. Petersen



ONLINE QUICK ID 2201101

Running Serverless Functions on Kubernetes

Serverless functions are modular pieces of code that respond to a variety of events. It's a cost-efficient way to implement microservices. Developers benefit from this paradigm by focusing on code and shipping a set of functions that are triggered in response to certain events. No server management is required and you can benefit from automated scaling, elastic load balancing, and the "pay-as-you-go" computing model. Kubernetes, on the other hand, provides a set of primitives to run resilient distributed applications using modern container technology. It takes care of autoscaling and automatic failover for your application and it provides deployment patterns and APIs that allow you to automate resource management and provision new workloads. Using Kubernetes requires some infrastructure management overhead and it may seem like a conflict putting serverless and Kubernetes in the same box.

Hear me out. I come at this with a different perspective that may not be evident at the moment.

You could be in a situation where you're only allowed to run applications within a private data center, or you may be using Kubernetes but you'd like to harness the benefits of serverless. There are different open-source platforms, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from the developer, allowing you to deploy and manage your applications using serverless architecture and patterns.

This article will show you how to run serverless functions using Knative and Kubernetes.

Introduction to Knative
Knative is a set of Kubernetes components that provides serverless capabilities. It provides an event-driven platform that can be used to deploy and run applications and services that can auto-scale based on demand, with out-of-the-box support for monitoring, automatic renewal of TLS certificates, and more.

Knative is used by a lot of companies. In fact, it powers the Google Cloud Run platform, IBM Cloud Code Engine, and Scaleway serverless functions.

The basic deployment unit for Knative is a container that can receive incoming traffic. You give it a container image to run and Knative handles every other component needed to run and scale the application. The deployment and management of the containerized app is handled by one of the core components of Knative, called Knative Serving. Knative Serving is the component in Knative that manages the deployment and rollout of stateless services, plus their networking and autoscaling requirements.
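To give a taste of how small that deployment surface is, once Knative Serving is installed (the install steps follow below), a single kn command can deploy a container image as an autoscaled service. A sketch (the service name and sample image here are just for illustration):

kn service create hello \
  --image gcr.io/knative-samples/helloworld-go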
The other core component of Knative is called Knative Eventing. This component provides an abstract way to consume Cloud Events from internal and external sources without writing extra code for different event sources. This article focuses on Knative Serving, but you'll learn about how to use and configure Knative Eventing for different use-cases in a future article.

Peter Mbanugo
[email protected]
www.pmbanugo.me
twitter.com/p_mbanugo

Peter Mbanugo is a writer and software developer who codes in JavaScript and C#. He is the author of "How to build a serverless app platform on Kubernetes". He has experience working on the Microsoft stack of technologies and also building full-stack applications in JavaScript. He's a co-chair on NodeJS Nigeria, a Twilio Champion, and a contributor to the Knative open-source project. He's the maker of Hamoni Sync, a real-time state synchronization as a service platform. He works with foobar GmbH as a Senior Software Consultant. When he isn't coding, he enjoys writing the technical articles that you can find on his website or other publications, such as on Pluralsight and Telerik.

Development Set Up
In order to install Knative and deploy your application, you'll need a Kubernetes cluster and the following tools installed:

• Docker
• kubectl, the Kubernetes command-line tool
• kn CLI, the CLI for managing Knative applications and configuration

Installing Docker
To install Docker, go to the URL https://docs.docker.com/get-docker and download the appropriate binary for your OS.

Installing kubectl
The Kubernetes command-line tool kubectl allows you to run commands against Kubernetes clusters. Docker Desktop installs kubectl for you, so if you followed the previous section on installing Docker Desktop, you should already have kubectl installed and you can skip this step. If you don't have kubectl installed, follow the instructions below to install it.

If you're on Linux or macOS, you can install kubectl using Homebrew by running the command brew install kubectl. Ensure that the version you installed is up to date by running the command kubectl version --client.

If you're on Windows, run the command curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe to install kubectl, and then add the binary to your PATH. Ensure that the version you installed is up to date by running the command kubectl version --client. You should have version 1.20.x or v1.21.x because in a future section, you're going to create a server cluster with Kubernetes version 1.21.x.

Installing kn CLI
kn CLI provides a quick and easy interface for creating Knative resources, such as services and event sources, without the need to create or modify YAML files directly. kn also simplifies completion of otherwise complex procedures, such as autoscaling and traffic splitting.

To install kn on macOS or Linux, run the command brew install kn.

To install kn on Windows, download and install a stable binary from https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest. Afterward, add the binary to the system PATH.



Creating a Kubernetes Cluster
You need a Kubernetes cluster to run Knative. You can use a local cluster using Docker Desktop or kind.

Create a Cluster with Docker Desktop
Docker Desktop includes a stand-alone Kubernetes server and client. This is a single-node cluster that runs within a Docker container on your local system and should be used only for local testing.

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, go to Preferences > Kubernetes and then click Enable Kubernetes.

Click Apply & Restart to save the settings and then click Install to confirm, as shown in Figure 1. This instantiates the images required to run the Kubernetes server as containers.

Figure 1: Enable Kubernetes on Docker Desktop

The status of Kubernetes shows in the Docker menu and the context points to docker-desktop, as shown in Figure 2.

Figure 2: kube context

Create a Cluster with kind
You can also create a cluster using kind, a tool for running local Kubernetes clusters using Docker container nodes. If you have kind installed, you can run the following command to create your kind cluster and set the kubectl context.

curl -sL \
https://raw.githubusercontent.com/csantanapr\
/knative-kind/master/01-kind.sh | sh

Create a Cluster with DigitalOcean Kubernetes Service
You can also use a managed Kubernetes service like DigitalOcean Kubernetes Service. In order to use DigitalOcean Kubernetes Service (http://digitalocean.com/products/kubernetes/), you need a DigitalOcean account. If you don't have an account, you can create one using my referral link https://m.do.co/c/257c8259d8ef, which gives you $100 credit to try out different things on DigitalOcean.

You'll create a cluster using doctl, the official command-line interface for the DigitalOcean API. After you've created a DigitalOcean account, follow the instructions on docs.digitalocean.com/reference/doctl/how-to/ to install and configure doctl.

After you've installed and configured doctl, open your command line application and run the command below in order to create your cluster on DigitalOcean.

doctl kubernetes cluster \
create serverless-function \
--region fra1 --size s-2vcpu-4gb \
--count 1

Wait for a few minutes for your cluster to be ready. When it's done, you should have a single-node cluster with the name serverless-function, in Frankfurt. The size of the node is a computer with two vCPUs and 4GB RAM. Also, the command you just executed sets the current kubectl context to that of the new cluster.

You can modify the values passed to the doctl kubernetes cluster create command. The --region flag indicates the cluster region. Run the command doctl kubernetes options regions to see possible values that can be used. The computer size to use when creating nodes is specified using the --size flag. Run the command doctl kubernetes options sizes for a list of possible values. The --count flag specifies the number of nodes to create. For prototyping purposes, you created a single-node cluster with two vCPUs and 4GB RAM.

Check that you can connect to your cluster by using kubectl to see the nodes. Run the command kubectl get nodes. You should see one node in the list, and the STATUS should be READY, as shown in Figure 3.

Figure 3: kubectl get nodes


Install Knative Serving
Knative Serving manages service deployments, revisions, networking, and scaling. The Knative Serving component exposes your service via an HTTP URL and has safe defaults for its configurations.

For kind users, follow the instructions below to install Knative Serving.

1. Run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-serving.sh | sh to install Knative Serving.
2. When that's done, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-kourier.sh | sh to install and configure Kourier.

For Docker Desktop users, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-docker-desktop/main/demo.sh | sh.

Follow the instructions below to install Knative in your DigitalOcean cluster. The same instructions will also work if you use Amazon EKS or Azure Kubernetes Service.

1. Run the following command to specify the version of Knative to install.

export KNATIVE_VERSION="0.26.0"

2. Run the following commands to install Knative Serving in namespace knative-serving.

~ kubectl apply -f \
https://github.com/knative/serving/releases/\
download/v$KNATIVE_VERSION/serving-crds.yaml

~ kubectl wait --for=condition=Established \
--all crd

~ kubectl apply -f \
https://github.com/knative/serving/releases/\
download/v$KNATIVE_VERSION/serving-core.yaml

~ kubectl wait pod --timeout=-1s \
--for=condition=Ready -l '!job-name' \
-n knative-serving > /dev/null

3. Install Kourier in namespace kourier-system.

~ kubectl apply -f \
https://github.com/knative/net-kourier/\
releases/download/v0.24.0/kourier.yaml

~ kubectl wait pod \
--timeout=-1s \
--for=condition=Ready \
-l '!job-name' -n kourier-system

~ kubectl wait pod \
--timeout=-1s \
--for=condition=Ready \
-l '!job-name' -n knative-serving

4. Run the following command to configure Knative to use Kourier.

~ kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch \
'{"data":{"ingress.class":\
"kourier.ingress.networking.knative.dev"}}'

5. Verify that Knative is installed properly. All pods should be in the Running state and the kourier-ingress service configured, as shown in Figure 4.

~ kubectl get pods -n knative-serving
~ kubectl get pods -n kourier-system
~ kubectl get svc -n kourier-system

Figure 4: Verify installation

6. Configure DNS for Knative Serving. You'll use a wildcard DNS service for this exercise. Knative provides a Kubernetes Job called default-domain that will only work if the cluster's LoadBalancer Service exposes an IPv4 address or hostname. Run the command below to configure Knative Serving to use sslip.io as the default DNS suffix.

~ kubectl apply -f \
https://github.com/knative/serving/releases/\
download/v$KNATIVE_VERSION/\
serving-default-domain.yaml

If you want to use your own domain, you'll need to configure your DNS provider. See https://knative.dev/docs/admin/install/serving/install-serving-with-yaml/#configure-dns for instructions on how to do that.

A Good Book to Read
If you're interested in serverless, Kubernetes, and Knative, you should check out my book "How to build a serverless application platform on Kubernetes." You'll learn how to build a serverless platform that runs on Kubernetes using GitHub, Tekton, Knative, Cloud Native Buildpacks, and other interesting technologies. You can find it on https://books.pmbanugo.me/serverless-app-platform.

Serverless Functions on Kubernetes
func is an extension of the kn CLI that enables the development and deployment of platform-agnostic functions as a Knative service on Kubernetes. It comprises function templates and runtimes and uses Cloud Native Buildpacks to build and publish OCI images of the functions.

To use the CLI, install it using Homebrew by running the command brew tap knative-sandbox/kn-plugins && brew install func. If you don't use Homebrew, you can download a pre-built binary from https://github.com/knative-sandbox/kn-plugin-func/releases/tag/v0.18.0, then unzip and add the binary to your PATH.

Functions can be written in Go, Java, JavaScript, Python, and Rust. You're going to create and deploy a serverless function written in JavaScript.

Create a Function Project
To create a new JavaScript function, open your command-line application and run the command kn func create sample-func --runtime node. A new directory named sample-func will be created and a Node.js function project will be initialized. Other runtimes available are: Go, Python, Quarkus, Rust, Spring Boot, and TypeScript.



};
}

Deploy the Function


To deploy your function to your Kubernetes cluster, use the
deploy command. Open the terminal and navigate to the
function’s directory. Run the command kn func deploy to
deploy the function. Because this is the first time you’re
deploying the function, you’ll be asked for the container
registry info for where to publish the image. Enter your reg-
istry’s information (e.g., docker.io/<username>) and press
ENTER to continue.
The index.js file contains the logic for the function. You can add more files to the project, or install additional dependencies, but your project must include an index.js file that exports a single default function. Let's explore the content of this file.

The index.js file exports the invoke(context) function, which takes a single parameter named context. The context object is an HTTP object containing the HTTP request data, such as:

• httpVersion: The HTTP version
• method: The HTTP request method (only GET or POST are supported)
• query: The query parameters
• body: The request body for a POST request
• headers: The HTTP headers sent with the request

The invoke function calls the handlePost function if it's a POST request, or the handleGet function when it's a GET request. The function can return void or any JavaScript type. When a function returns void and no error is thrown, the caller receives a 204 No Content response. If you return a value, it's serialized and returned to the caller.

Modify the handleGet function to return the value of the TARGET environment variable, the query parameters, and the current date. Open index.js and update the handleGet function with the definition below:

function handleGet(context) {
  return {
    target: process.env.TARGET,
    query: context.query,
    time: new Date().toJSON(),
  };
}
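
With that change in place, the whole file has roughly this shape. This is a sketch, not the verbatim template: the generated code differs between func versions, and the handlePost body here is illustrative.

// A sketch of the complete index.js; the exact template text
// varies between versions of the func CLI.
function handleGet(context) {
  return {
    target: process.env.TARGET,
    query: context.query,
    time: new Date().toJSON(),
  };
}

// Illustrative handler (not from the article): echo the POST body back.
function handlePost(context) {
  return { body: context.body };
}

// The single default export that's invoked for every request.
// It dispatches on the HTTP method; only GET and POST are supported.
function invoke(context) {
  if (context.method === 'POST') {
    return handlePost(context);
  }
  return handleGet(context);
}

module.exports = invoke;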

Deploy the Function

To deploy your function to your Kubernetes cluster, use the deploy command. Open the terminal and navigate to the function's directory. Run the command kn func deploy to deploy the function. Because this is the first time you're deploying the function, you'll be asked for the container registry where the image should be published. Enter your registry's information (e.g., docker.io/<username>) and press ENTER to continue.

The function will be built and pushed to the registry. After that, it'll deploy the image to Knative and you'll get a URL for the deployed function. Open the URL in a browser to see the returned object. The response you get should be similar to what you see in Figure 5.

Figure 5: Function invocation response

You can also run the function locally using the command kn func run.
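
Either way, you can exercise the function from the terminal. As a sketch, with <your-function-url> standing in for the URL that kn func deploy printed (kn func run typically serves on localhost:8080 instead):

~ curl "https://<your-function-url>?name=CODE"

The response should be the JSON object built in handleGet, with the name query parameter echoed back under query.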
Other Useful Commands

You're now familiar with creating and deploying functions to Knative using the build and deploy commands. There are other useful commands that can come in handy when working with functions. You can see the list of available commands using kn func --help.

The deploy command builds and deploys the application. If you want to build an image without deploying it, you can use the build command (i.e., kn func build). You can pass it the flag -i <image> to specify the image, or -r <registry> to specify the registry information.

The kn func info command can be used to get information about a function (e.g., the URL). The command kn func list, on the other hand, lists the details of all the functions you have deployed.

To remove a function, use the kn func delete <name> command, replacing <name> with the name of the function to delete.
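
Put together, a typical build-inspect-clean-up session with these commands looks like this (docker.io/<username> stands in for your own registry):

~ kn func build -i docker.io/<username>/sample-func

~ kn func info

~ kn func list

~ kn func delete sample-func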
You can see the configured environment variables using the command kn func config envs. If you want to add an environment variable, you can use the kn func config envs add command. It brings up an interactive prompt to add environment variables to the function configuration. You can remove environment variables as well using kn func config envs remove.

What's Next?

Serverless functions are pieces of code that take an HTTP request object and provide a response. With serverless functions, your application is composed of modular functions that respond to events and can be scaled independently. In this article, you learned about Knative and how to run serverless functions on Kubernetes using Knative and the func CLI. You can learn more about Knative on knative.dev, and a cheat sheet for the kn CLI is available on cheatsheet.pmbanugo.me/knative-serving.

Peter Mbanugo



CODE COMPILERS

Jan/Feb 2022
Volume 23 Issue 1

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Contributing Editor: John V. Petersen
Content Editor: Melanie Spiller
Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers in This Issue: Bilal Haidar, Joydip Kanjilal, Sahil Malik, Peter Mbanugo, Rod Paddock, John V. Petersen, Paul D. Sheriff, Shawn Wildermuth, Mike Yeager
Technical Reviewers: Markus Egger, Rod Paddock
Production: Friedl Raffeiner Grafik Studio, www.frigraf.it
Graphic Layout: Friedl Raffeiner Grafik Studio in collaboration with onsight (www.onsightdesign.info)
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, [email protected]
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: American News Company (ANC) Media Solutions
Subscriptions: Subscription Manager, Colleen Cade, [email protected]

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $50.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards are accepted. The bill-me option is available only for US subscriptions. Back issues are available. For subscription information, e-mail [email protected].

Subscribe online at www.codemag.com

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445


CODA: On Plain, Simple Language

The Danish physicist and Nobel Laureate Niels Bohr believed that any concept, no matter how complex, should be explainable in plain, simple language. Bohr was instrumental in aiding understanding of Quantum Theory, a very complex subject indeed. He believed that once the heavy lifting was completed as to any theory, the key to advancement was in a collective understanding of what something generally is and its associated benefits. Not everyone was equally an expert in Quantum Theory. And there was only one Niels Bohr. Although he could have a conversation with others in his field on an equal footing, others needed to have some basic understanding of what these complex and abstract ideas could practically accomplish. Institutions and governments funded research, but sanction and funding were ultimately implemented by people; people who weren't on the same intellectual plane as Niels Bohr.

Simple does not mean dumbed-down. Simple means not complex. We can understand, in general terms, how a clock, a car, or a computer works without knowing how to build those things. Think of an elevator pitch and what's required for it to work. If an elevator ride lasts no more than 15 seconds, in that time, to get to the next step, the idea or concept must be expressed in language that anybody can understand. Understanding does not require expertise. If getting to the next step required equal footing on the same expertise, nothing would ever be accomplished. We're just trying to get to the next step, not solve every problem immediately.

What, then, qualifies as "plain, simple language?" That very much depends on your point of view and how that point of view meshes with other points of view. Those points of view must be reconciled for it to seem simple to all parties.

A specific area of software development that depends on a common language between developers and users is Domain Driven Design (DDD), an approach developed by Eric Evans. In that work, he coined the term Ubiquitous Language.

DDD is a very good book for outlining what the two sides of the software development coin, developers and users, should do. If it were only that easy: to just speak the same language, to use common terms that reference the same commonly accepted thing. Bounded contexts and well-defined domain models sound good. Those things, which are DDD elements, are themselves the end-result of some other process. It begs the question of how we, the two sides, build "quality" software.

Quality can be a bit of a subjective, loaded term. In this context, is the software easy to understand? Does it work according to the customer's expectations? That's a practical, not a theoretical concern. If the software doesn't work, all the theory in the world won't matter. The art of software development, as I see it, is the proper mix of theory and pragmatism. There's a lot of good theory in the DDD book and other resources, with some of it being useful in a pragmatic sense. It's always important to remember that the value of theory is a function of its application in the real world.

One possible answer to address how to employ plain, simple language is the Agile Manifesto and its 12 Principles (http://www.agilemanifesto.org/principles.html), in which the principles themselves are based on the Agile Manifesto (http://www.agilemanifesto.org). The Agile Manifesto and its 12 Principles are abstract concepts and are arguably written in plain, simple language. That's a subjective call based on a shared point of view.

The one thing I always found interesting about the Agile principles is the relative position of principles One and Seven:

• Principle One: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
• Principle Seven: Working software is the primary measure of progress.

Why aren't these two principles adjacent to one another? Is that important? The only way software can deliver value to the business is if it's A): delivered, and B): works per the business's requirements. Reading the principles, the two are disconnected. Their relative position to one another is itself important context that aids understanding and thus goes to whether the language is truly "plain and simple!" This is where having a good editor matters.

Then there's Principle Four: Business people and developers must work together daily throughout the project.

Does this mean that developers aren't or can't be business people? Are these mutually exclusive? What does it mean to "work together?" What about understanding each other?

How about Principle Three: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.

A couple of weeks vs. a couple of months? The term "couple" is more idiomatic English than a precise ordinal measure. Is it two weeks?

The principles declare a preference for "the shorter timescale." Scrum, an agile-based framework, arguably conflicts with, or at the very least makes ambiguous, whether Principle Three is a bona-fide agile requirement, because at the conclusion of each sprint, the aim is to deliver "potentially releasable software." If you don't release software, it isn't delivered. And if it isn't delivered, it isn't delivering any value. Just ask any businessperson who's funding the project with a budget and business objectives! The conflict here is that on one hand, scrum is based on Agile Principles. On the other hand, arguably, scrum, via the Scrum Guide (http://www.scrumguides.org), appears to throw this Agile principle out the window! In other words, as plain and simple as the language appears, when the one thing (the collection of principles) conflicts with the other thing (the manifesto) it's based upon, is the language really plain and simple? To the contrary, the language is ambiguous and becomes less and less useful. In such a case, the gulf between the theory and its practical usefulness increases. Theory, without any sort of practical application in the business context, is useless. In my opinion, Agile has drifted away from its roots in the manifesto, in part because of a lack of the plain, simple language that leads to better understanding.

I've always adhered to one basic principle: When in doubt, go back to basics. This isn't a rant against Agile. But of late, there has been some revolt against Agile and its progeny (scrum, in particular), citing that Agile just doesn't work. The Scrum Guide, where scrum is codified, has been "tinkered" with over the years. Once upon a time, it stated that scrum was simple to understand but difficult to master. I believed that when it was written, and I still believe it today, because it's true. The problem with the Scrum Guide is that it no longer states that simple, plain truth.

In the 2020 version, that statement was reduced to "scrum is simple." The 2020 guide also states that Scrum is "purposely incomplete." When the revision history between 2017 and 2020 is reviewed (http://www.scrumguides.org/revisions.html), you can note that, among other things, scrum has become less prescriptive.

There's also the change in wording from self-managing to self-organizing. Is that a distinction without a difference? Is that plain, simple language that leads to better understanding? No. That's a subjective call. It's one that each team and organization must answer for itself.

This is also not a rant against scrum, whether it's the scrum.org camp or the scrumalliance.org camp. They are two different organizations, and they both point to the same Scrum Guide (www.scrumguides.org) and the Agile Manifesto (www.agilemanifesto.org), yet how each camp describes scrum isn't always consistent. Heck, I've been practicing Agile principles in one form or another for over 20 years and I'm confused. It's for that reason that I've decided to use the Agile Manifesto as my sole basis to define Agile, which bears repeating in its entirety here:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

That is, while there is value in the items on the right (processes and tools), we value the items on the left (individuals and interactions) more.

The Agile Manifesto nailed it as far as A): plain, simple language, and B): what could be built from it.

Is Agile alone enough to build software? Of course not. Agile doesn't exist in a vacuum. Software development doesn't exist in a vacuum either. Business doesn't exist in a vacuum. If an organization is to adopt Agile, software development and the business must converge so that they can work consistently toward the same goals and objectives.

What is your software supposed to do? What is it not supposed to do? There are two essential elements, two sides of the software development coin: A): IT, and B): the business. Between the two, there's a chasm that must be spanned. The notion of a Ubiquitous Language, a term coined in the software development context in 2002, was on the right track but missed the mark through all its mechanics and ceremony. And it should be noted that IT serves the business, period.

All software has business requirements. Such requirements are going to be articulated in business terms by the business, as they should be, because it's the business that owns the software and will be the arbiter of whether the software works and, consequently, whether it delivers value. It's incumbent on software developers to be the Rosetta Stone that makes the translation. We must become expert enough in the business to do that. The business isn't going to be expert in software development.

In my experience, open, honest, and transparent communication is a great place to start. At the beginning of every project, people want the same thing. Agile may or may not be better than Waterfall in a given context; it all depends on your organization. After all, context is a necessary factor in assessing whether language is plain and simple. We must always consider who the audience is. It's an obvious point, perhaps, but it's one worth repeating. Employing plain, simple language where individuals and interactions work and collaborate to build, modify, and deliver working software that yields value is, in my opinion, a more straightforward way of articulating the Agile Manifesto's true meaning and intent. As for the Agile Principles, that's up to you and your organization. There is, or at least there should be, a shared goal of getting things accomplished. Only you and your team can assess what those goals and objectives should be. Principles, absent context and what they are supposed to mean, despite what appears to be plain, simple language, are useless.

Every successful project has one thing in common: good people with a shared goal. Yes, mistakes are made along the way. We never get it right the first or second time in most cases. For us techies, let's avoid tech talk. And if the business is lacking clarity on something, ask for more clarity. If something isn't feasible, explain why not in plain, simple language that conveys the right concerns that the business can act upon. If we don't do that, then how will we understand each other? And if we can't do that, how will we ever build and deliver working software that delivers value?

John V. Petersen



GET YOUR FREE HOUR

TAKE AN HOUR ON US!

Does your team lack the technical knowledge or the resources to start new software development projects,
or keep existing projects moving forward? CODE Consulting has top-tier developers available to fill in
the technical skills and manpower gaps to make your projects successful. With in-depth experience in .NET,
.NET Core, web development, Azure, custom apps for iOS and Android and more, CODE Consulting can
get your software project back on track.

Contact us today for a free 1-hour consultation to see how we can help you succeed.

codemag.com/OneHourConsulting
832-717-4445 ext. 9 • [email protected]
NEED MORE OF THIS?

Is slow, outdated software stealing way too much of your free time? We can help.
We specialize in updating legacy business applications to modern technologies.
CODE Consulting has top-tier developers available with in-depth experience in .NET,
web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.

Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.

codemag.com/modernize
832-717-4445 ext. 9 • [email protected]
