CODE Magazine, January/February 2022
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $ 8.95 Can $ 11.95
Kafka
Event-Streaming Platform
Features
8 The Basics of Git
If you haven't heard of Git, you've clearly been off the grid for a long time. Sahil talks about this ubiquitous tool, and maybe shows you something you didn't know about it.
Sahil Malik

62 Minimal APIs in .NET 6
Controller-based APIs have been around for a long time, but .NET 6 changes everything with a new option. Shawn shows you how it works.
Shawn Wildermuth
Columns
Mike Yeager
US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $50.99 USD. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information,
send e-mail to [email protected] or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
Finding Inspiration
This editorial marks a huge milestone in my life: It officially marks the end of my first 20 years as editor in
chief of CODE Magazine. And before you get any ideas, this is NOT my last editorial! I have many more
years ahead of me to “entertain” you with my witty banter and deep knowledge of software engineering,
science, music, and Dungeons & Dragons. Some of that statement is true.

When I started thinking about this editorial, I reviewed a bunch of my past editorials and was proud of what we've accomplished at CODE Magazine in the last 20 years. It's amazing how much things have changed in that time. The early 2002 issues were all about this new ".NET Initiative," Web Services, XML, and XSLT. The cloud was non-existent at the time, there was no Twitter, no Facebook—heck, Amazon's primary business was selling books. How things have changed!

As some of you know, I was hired as EIC of CODE Magazine via an instant messaging session (ICQ, I think) and, to be honest, this was a dream come true. From the time I was in high school, I dreamt of being a writer. Did I want to be a tech writer? Heck no! I wanted to be a Dungeons & Dragons writer. That dream was partially fulfilled in high school when I published my first D&D article called "The Role of Taxes." I was a geek then and I'm a true geek now. So, for those long-time readers (and new ones of course), where am I going with this? Well, I want to talk about inspiration.

As I was thinking about the things that inspired me and how to best represent them in this editorial, I decided on a picture. Like many geeks, I've spent decades collecting various geek trophies of things I enjoy. Figure 1 shows a bookshelf containing the many, many things that provide me with comfort and inspiration. I'm going to highlight a few of them.

"Glacial Rift of the Frost Giant Jarl." Yes, that was the title, and some 40 years later, I can still recite the names of many of these modules. The names were epic. D&D was (and still is) an amazing game, and it took me to many mythological as well as real-world places. From Greek to Roman to Norse to Tolkien, every mythology was represented. As for the real-world places, I met many other gamers in high school (I was president of the Golden Dragon Club at one point) and at numerous conventions in places like Los Angeles and Milwaukie. I was deep into this game, and that inspired me to start writing about it. Like all new writers, I got a TON of rejection letters, but I persevered and had some minor successes. These successes drove me forward.

Programming
I was determined to be a writer until I discovered programming. Programming has always been fun for me, and when I discovered databases in college, I knew what I wanted to do for a career. I started in the DOS era and have continued to write code for over 30 years now. I still find enjoyment in slinging code to this day. Over the years, I've had the opportunity to work with some great developers and have grown to be a fairly skilled programmer myself. But however skilled I became, I missed writing. It was programming skills that eventually led me back to writing.

Writing
In 1992, I decided to see if I could get published in a computer magazine. I went to a software conference and proposed an idea to Dian Schaffhauser, who was an editor at Database Advisor Magazine. She accepted, and I went to work writing my first article. One article led to another, and another, and another, and eventually it led to writing books and finally to being EIC of CODE Magazine. I've never stopped writing. As a matter of fact, I wrote an article for this issue! Writing has been a source of inspiration to, well, keep writing. It's a sickness, I think. Talk to me about writing books some time.
ONLINE QUICK ID 2201021
Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer.

Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets.

His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

such as Azure DevOps, Bitbucket, and Atlassian all support Git. First things first: I'm going to avoid the lightning-rod discussion of whether Git is a good product or not. The reality is that whether you like it or not, all of us use it. And let's be honest: It has proven to be scalable enough for the largest source code repositories, and it's pretty easy to get started with, too.

Yet it's one of those products that really drives me mad. So I thought it might be worth writing an article explaining the basics of Git. With a strong foundation, you can build taller buildings.

Let's start learning Git.

Centralized Source Control vs. Decentralized Source Control
If you've worked with older source control software, such as Visual SourceSafe, CVS, PVCS, or many others before that, you're familiar with centralized source control. In centralized source control, there's a server in the middle that all developers talk to. Any software project is comprised of many files. If you wish to work on a certain file, you check out that file. While that file is checked out, its status is marked checked out in the centralized source control repo. If any other developer wishes to overwrite that file, they're unable to, because it's checked out to you. You need to check in your changes first, and the other developer's changes are the other developer's headache. The other developer must probably do a merge or something similar. All of this works fine, but it has two main problems.

The first issue is what happens if that central server goes down. You can continue to work on the previous snapshot you pulled from the server. But sooner or later, when you need to re-sync your changes to the server or check in files, you hit a wall. You can't, for instance, continue working with source control locally. For that matter, you can't work with an alternate remote upstream location in the meantime, such as a co-worker's source control. And what if you want source control on just your computer, without any need to share with the rest of the world, for a pet project that's complex enough to deem source control?

The second issue, of course, is scale. Centralized source control repos assume a small set of developers working very closely together. These days, we all contribute to very large source control repos, which are typical in popular open-source projects. A centralized source control mechanism that relies on locking in a central location simply doesn't scale to the general complexity of large-scale repos and a disconnected working model.

Both of these issues are fuzzy in nature. Visual SourceSafe fans insist that there are workarounds to these problems. But just because you can row to Japan in a tiny boat doesn't mean it's a good idea. To the rest of the world, it's clear that we need a new approach: decentralized source control.

The opposite of centralized source control is decentralized source control, of which Git is an example. In a decentralized source control mechanism, you can have many locations with the source control repo. These locations can be servers, or they can even be your own local hard disk, or they can be a coworker's hard disk. You can merge changes between these multiple source control repos. Also, you don't rely on exclusive check-in and check-out anymore. Instead, you rely on merges and commits. This invariably has the downside of merge conflicts. Good coding patterns, good architectural practices, and writing good tests reduce this pain to some degree, although they don't eliminate it.

Install Git
Many development tools, such as Xcode, already come with Git packaged. Even if you already have Git on your computer, it's a good idea to update it. The instructions are unique per operating system. Rather than rehashing instructions here, I suggest that you visit https://git-scm.com/book/en/v2/Getting-Started-Installing-Git and follow the instructions for your operating system to install Git on your computer.

Once you've installed it, you should be able to run the command "git" in a terminal. For Windows, you'll notice that after installation, you get a special terminal called "Git Bash". This is a special terminal/command window on Windows that tries to emulate a Unix-like terminal. You're welcome to use it, although I've also used Git through the PowerShell window and never run into any issues. I do feel that you should lean on a Unix-like terminal even on Windows, because a lot of commands invariably end up making use of Unix-like commands intertwined with Git commands. Most devs mix and match them without even thinking about it.

Configure Git
Before you can use Git, you have to do some basic configuration. At the bare minimum, you'll need to specify a name and email—this is your information, who you are when you issue a commit. Of course, the server-side repo also authenticates you through the various means that Git supports.

You can also optionally specify a default editor, and I highly recommend that you specify a line-ending format as well.

Let's perform this basic configuration on your computer.

When you perform Git configuration, you can do so at one of three levels. You can do so at a system level, which affects all users on your computer. You can do so in your user profile (Git's "global" level), in which case it affects all work in that user's profile. Or you can specify settings at a folder level, where you wish to have certain settings affect only certain repos. The user-level settings go in a hidden file called ".gitconfig". My gitconfig looks like that shown in Figure 1.

Next, let's specify a default editor. By default, Git uses Vim. A lot of people love Vim. Personally, I never have to restart my Mac unless I'm trying to exit Vim. There are just too many damned shortcut keys to remember. No, I don't dislike it; in fact, when I'm ssh'ed into a Docker container, using something such as VSCode may not be an option. But I do find myself more productive in VSCode, so I'll just set that as my default editor as follows.

As you can see, VSCode pops open with your .gitconfig settings. No longer do you have to remember the shortcut "shift+ZZ" to save and exit, because this isn't Vim.
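The configuration commands behind the steps just described aren't all shown in this excerpt; a minimal sketch of them looks like this (the name and email values are placeholders):

```shell
# Identity recorded in every commit you author
# (placeholder values -- use your own)
git config --global user.name "Jane Developer"
git config --global user.email "[email protected]"

# Use VSCode as Git's editor; --wait makes Git block until
# the editor window is closed
git config --global core.editor "code --wait"

# Read a value back to confirm it was written
git config --global user.name
```

The --global flag writes these settings to your user-level .gitconfig, so they apply to every repo under your profile.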
Figure 1 shows my settings, which have a few additional things I haven't talked about. Your settings file may look slightly different.

Finally, let's configure end-of-line settings. This is a very important setting, so let's understand what it is. On Windows, an end of line looks like this:

text\r\n

On Mac/Linux, try this:

text\n

Notice the difference? Windows likes to use carriage return and new line. The reasons for this are historical, and so deeply rooted that Windows isn't going to change. But this creates a big problem when some of your developer friends are on Macs and you're on Windows. In fact, when contributing to OSS projects, this will invariably be the case. So as a best practice, perform the following configuration on Windows:

git config --global core.autocrlf true

This will cause Git to strip out the \r's (carriage returns) when checking your files in.

One other thing I highly recommend: If you're in a Mac or Linux environment, set up zsh with a theme called "oh-my-zsh". It makes great use of the Git plug-in and gives you syntax highlighting in the terminal and even tab completion. This can be seen in Figure 2. On Windows, you can either use the instructions at https://winsmarts.com/running-oh-my-zsh-on-windows-10-6fcb0fbc736b, you can set up WSL2, or you can use posh-git.

Figure 2: Git with tab completion

Initialize a Git Repo
A Git repo (short for repository) lives in a folder. Go ahead and create a new folder. I created one called "gitlearn". To initialize a new empty repository in this folder, while inside this folder in the terminal, issue the following command:

git init

This creates a new Git repository in this folder. Additionally, it creates one branch in this empty repository. The name of the branch by default is "master", although you can configure Git to use "main" by default if you prefer. You can choose to change the name of the initial default branch as follows:
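The exact command referred to above is elided in this excerpt; renaming the initial branch is commonly done like this (the repo name follows the article's "gitlearn"; the identity values are placeholders):

```shell
# A throwaway repo to demonstrate on
git init gitlearn
cd gitlearn
git config user.name "Jane Developer"        # placeholder identity
git config user.email "[email protected]"
git commit --allow-empty -m "first commit"

# Rename the current branch (e.g., "master") to "main"
git branch -M main
git branch --show-current

# Make "main" the initial branch in future repos (Git 2.28+)
git config --global init.defaultBranch main
```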
Now let’s also add a server briefly. When you add commit-
ted code to your local repo, you can choose to “push” to
an upstream location. That upstream location is the server.
During the push, you may have to resolve conflicts, merge
your team members’ code, etc.
Finally, you have an area that sits in the middle of the committed area and the working area: the staging area. The staging area is your "proposal to commit." Think of it as: Okay, I've been working on stuff in my local area, and I'm ready to commit. And here is my proposal of what I want to commit. For instance, I propose these three files to be added, these two files to be renamed, etc. I "stage" those changes in the staging area before I commit. And then I commit.

Figure 4: Git status tells me that I have an untracked file.

Figure 5: I have staged, but not yet committed.
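The working-area/staging-area flow described above can be sketched in a fresh, self-contained repo (file names and identity values here are illustrative):

```shell
mkdir staging-demo && cd staging-demo
git init
git config user.name "Jane Developer"        # placeholder identity
git config user.email "[email protected]"

echo 'hello' > readme.md    # working area: a brand-new file
git status --short          # shows "?? readme.md" (untracked)

git add readme.md           # stage it: the "proposal to commit"
git status --short          # shows "A  readme.md" (staged)
```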
git commit
This should pop open your default editor, in my case VSCode, to enter a commit message. I could also say git commit -m "message" to avoid opening the editor. I'll enter some message like "My first commit" and save to commit. Now my prompt should change as shown in Figure 6.

Congratulations, you just did your first commit. Now try executing git status again. It should tell you that your working tree is clean. You can see your history of commits by executing the following command:

git log

At any point, I really encourage you to add -h to any command and examine what other options are supported.

That was fun, but there's still so much more to learn: branches, server-based stuff, forking, merging. So stay the course, young Padawan.

Now wait a second. Let's unpack this a bit. What's that funny-looking syntax? How did Git authenticate me?

First of all, git remote lets me manage tracked remote repositories. By saying git remote add, I'm saying that I wish to add a remote tracked repository. The origin keyword indicates that here is where this project was originally cloned from. You didn't clone your project from the remote repository, but you're basically saying that in the process of setting things up, in the future, developers can clone from here. The final parameter is the URL.

The way authentication works here is that by default, GitHub uses username and password. But that's neither secure nor manageable. So it also supports SSH, which is what I have set up on my computer. Finally, you can use credential helpers to use alternate mechanisms of authentication as well.
git push
This makes sense if you think about it. I never told my Git
repo which upstream location “newchange” should be sent
to. And it gives me a helpful command to fix it. So go ahead
and run that command, which then sets the upstream loca-
tion and pushes my changes.
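That flow can be sketched end-to-end with a local bare repo standing in for the server (the "newchange" branch name follows the article; paths and identity values are made up):

```shell
demo=$(mktemp -d)

# A bare repo plays the role of the hosted server
git init --bare "$demo/upstream.git"

git init "$demo/work" && cd "$demo/work"
git config user.name "Jane Developer"        # placeholder identity
git config user.email "[email protected]"
git commit --allow-empty -m "first commit"
git checkout -b newchange

git remote add origin "$demo/upstream.git"

# A plain `git push` would stop here with:
#   fatal: The current branch newchange has no upstream branch.
# Git's own hint supplies the fix, which pushes AND records
# the upstream in one step:
git push --set-upstream origin newchange
```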
Use that “Compare & pull request” button to create a PR. This You created a new folder and moved the secondfile.md into
gives you a nice overview of the changes, the comments, files that folder, effectively deleting it from the root folder and
committed, approvers, labels, etc., which is great for a devel- adding a new file in afolder. Then you renamed readme.md
opment workflow. You can also set up bots to do some basic to dontreadme.md.
review for you, and all sorts of other automation involving
humans. When you’re done, you can merge the pull request, Now, running a Git status basically tells me everything that
and delete the branch, as shown in Figure 13. I just did. You can see this in Figure 14.
Now you’ve merged the PR, deleted the branch, and your And now I can stage, commit, and push my changes as follows.
changes are in main. You can feel free to also delete your
local “newchange” branch as follows: git add .
git commit -m "More changes"
git branch -d newchange git push
Figure 14: The Git status for a bunch of stuff I just did.

In my .gitignore file, I choose to put the following text:
env.txt
afolder/
At this point, I’m going to add, commit, and push. Now let’s
visit my repository on github.com and examine what it looks
like. This can be seen in Figure 16.
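The effect of that .gitignore can be sketched in a fresh repo (the ignore entries match the article's; the file contents are made up):

```shell
git init ignore-demo && cd ignore-demo

printf 'env.txt\nafolder/\n' > .gitignore
echo 'SECRET=1' > env.txt                  # matched by env.txt
mkdir afolder
echo 'x' > afolder/secondfile.md           # matched by afolder/

# Only .gitignore itself shows as untracked; the ignored
# file and folder never appear in the status output
git status --short
```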
Figure 17: The working tree remains clean even after new files are added in ignored folders.
Interestingly, now my working folder is no longer clean—even though I made the change in a file that resides in a folder that I've instructed to be ignored. This is because the file "secondfile.md" was already being tracked.

Why is this useful? It's useful for configuration settings, such as web.config or .env files. It's quite normal for developers to check in an .env.sample file instructing other developers who clone the repository to follow the structure of .env.sample when they create their own .env files.

This .env file is instructed to be ignored from the get-go, but the .env.sample file is not. This means that I can continue to maintain .env.sample and keep my instructions updated, while the .env remains safely out of source control.

Running this command informs me of the changes Git made. It didn't remove the files from my disk, though. To fully understand the changes, I then run a git status command, which tells me that it deleted .gitignore—but not really: it now shows .gitignore as untracked. This means that when I do a "git add .", those untracked files will be tracked again. But you know what won't be tracked? The afolder/secondfile.md won't be tracked going forward.

This achieves my goal of telling Git: hey, really, stop tracking this entire folder, just like my .gitignore instructs you to do.

To put it simply, adding a .gitignore won't cause Git to stop tracking files that are already being tracked.
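The command the author ran isn't shown in this excerpt; my assumption is `git rm -r --cached .`, the usual way to make Git forget already-tracked files while leaving them on disk:

```shell
git init untrack-demo && cd untrack-demo
git config user.name "Jane Developer"        # placeholder identity
git config user.email "[email protected]"

mkdir afolder && echo 'data' > afolder/secondfile.md
git add . && git commit -m "track everything"

echo 'afolder/' > .gitignore   # ignoring alone does NOT untrack

# Drop everything from the index (files stay on disk), then
# re-add; .gitignore is honored this time around
git rm -r --cached .
git add .
git commit -m "stop tracking afolder"

git ls-files      # afolder/secondfile.md is no longer tracked
ls afolder        # ...but it still exists on disk
```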
Now, go ahead and do an add, commit, and push. Now your Git repo should look like Figure 20.

You can imagine that "afolder" could be something like node_modules, or something that you actually wanted to get rid of.

Diff
When you're working on a software project, you're editing files. This is your source code, and you need plenty of things to help you keep control of what's being committed. You've already seen a Git command called git status that lets you do this at the file level. But what about changes inside a file? Perhaps you want a good way to compare two versions of a file and get a clear idea of what changes will be made if you push your changes.

Let's understand this with an example. The current state of my Git repo is shown in Figure 21.

As can be seen in Figure 21, I have one file in the root called "dontreadme.md", and a few other files and folders.

echo 'even more stuff' >> dontreadme.md
echo 'brand new file' > readme.md

You can now run git status to see what has changed. Here's a trick. The output of git status can be quite wordy. If you want to see a quick shorthand output, which may be useful when you have a lot of files, use the following command:

git status -s

The output of this command looks like this:

 M dontreadme.md
?? readme.md

This output tells you that the dontreadme.md file has been modified, and that the readme.md file is untracked.

Now, go ahead and stage dontreadme.md.

git add dontreadme.md

Run a git status -s again. The output in text remains the same, but notice closely that the "M" by dontreadme.md has changed from red to green.
Listing 1: Output of git diff

diff --git a/dontreadme.md b/dontreadme.md
index b84a4f4..d41a9fb 100644
--- a/dontreadme.md
+++ b/dontreadme.md
@@ -1,3 +1,4 @@
 first file
 more changes
 even more stuff
+so much stuff

Now append some more text to dontreadme.md. Don't stage this newly appended content, and run git status -s again. This time you'll see that the dontreadme.md status now says "MM": one M is green, and the other is red. This can be seen in Figure 22.

So now you have some content in remote, some content staged, and some in the working copy, and all this content is slightly different from each other.
git diff
It tries to color code it, but the color coding can frequently
get messed up over ssh sessions or your local settings.
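For reference, the three comparisons git diff can make map onto the areas discussed earlier; the sketch below builds a small repo into the article's state (file names from the article, contents and identity made up):

```shell
git init diff-demo && cd diff-demo
git config user.name "Jane Developer"        # placeholder identity
git config user.email "[email protected]"

echo 'first file' > dontreadme.md
git add . && git commit -m "first commit"

echo 'more changes' >> dontreadme.md
git add dontreadme.md                       # a staged change
echo 'even more stuff' >> dontreadme.md     # an unstaged change

git diff            # working tree vs. staging area
git diff --staged   # staging area vs. last commit (HEAD)
git diff HEAD       # working tree vs. last commit
```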
If you want to compare staged with remote, you simply use this command:

First, instruct VSCode to act as the diffing tool for Git. To do so, edit your gitconfig file:

git config --global -e

Once your .gitconfig opens in your configured editor, add the following lines at the bottom of it:

If you hit "Y" on that prompt, it opens VSCode, which then takes care of showing you the diff. This can be seen in Figure 23.

You can also try "git difftool --staged" to view the staged diff.

Figure 23: Here, I'm diffing in VSCode like a champ.

I'm really curious to know: Do you consider yourself to be a seasoned developer? Do you use Git regularly? Even in these ultra-basic commands around Git usage, did you discover anything new? What complex Git situations would you like to see broken down in future articles? Do let me know.

git commit -m "That's a wrap" && git push
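The gitconfig lines referred to above aren't shown in this excerpt; my assumption of the usual VSCode diff-tool configuration, expressed as equivalent git config commands, is:

```shell
# Tell Git which diff tool to launch...
git config --global diff.tool vscode

# ...and how to launch it; $LOCAL/$REMOTE are expanded by Git,
# and --wait keeps VSCode open until you close the diff tab
git config --global difftool.vscode.cmd \
  'code --wait --diff $LOCAL $REMOTE'

# Confirm the setting
git config --global diff.tool
```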
Paul D. Sheriff
http://www.pdsa.com

Paul has been in the IT industry over 33 years. In that time, he has successfully assisted hundreds of companies to architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has 23 courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap. Contact Paul at [email protected].

Your MVC Applications Using JavaScript and jQuery: Part 2 (https://www.codemag.com/Article/2111031/Enhance-Your-MVC-Applications-Using-JavaScript-and-jQuery-Part-2), you learned about the starting MVC application, which was coded using all server-side C#. You then added JavaScript and jQuery to avoid post-backs and enhance the UX in various ways. If you haven't already read these articles, I highly recommend that you read them to learn about the application you're enhancing in this series of articles.

In this article, you're going to build Web API calls that you can call from the application to avoid post-backs. You're going to add calls to add, update, and delete shopping cart information. In addition, you're going to learn to work with dependent drop-down lists to also avoid post-backs. Finally, you learn to use jQuery auto-complete instead of a drop-down list to provide more flexibility to your user.

The Problem: Adding to Shopping Cart Requires a Post-Back
On the Shopping page, each time you click on an Add to Cart button (Figure 1), a post-back occurs and the entire page is refreshed. This takes time and causes a flash on the page that can be annoying to the users of your site. In addition, it takes time to perform this post-back because all the data must be retrieved from the database server, the entire page needs to be rebuilt on the server side, and then the browser must redraw the entire page. All of this leads to a poor user experience.

The Solution: Create a Web API Call
The first thing to do is to create a new Web API controller to handle the calls for the shopping cart functionality. Right mouse-click on the PaulsAutoParts project and create a new folder named ControllersApi. Right mouse-click on the ControllersApi folder and add a new class named ShoppingApiController.cs. Remove the default code in the file and add the code shown in Listing 1 to this new class file.

Add two attributes before this class definition to tell .NET that this is a Web API controller and not an MVC page controller. The [ApiController] attribute enables some features such as attribute routing, automatic model validation, and a few other API-specific behaviors. When using the [ApiController] attribute, you must also add the [Route] attribute. The route attribute adds the prefix "api" to the default "[controller]/[action]" route used by your MVC page controllers. You can choose whatever prefix you wish, but the "api" prefix is a standard convention that most developers use.

In the constructor for this API controller, inject the AppSession, and the product and vehicle type repositories. Assign the product and vehicle type repositories to the corresponding private read-only fields defined in this class.

The AddToCart() method is what's called from jQuery Ajax to insert a product into the shopping cart that's stored in the Session object. This code is similar to the code written in the MVC controller class ShoppingController.Add() method. After adding the id passed in by Ajax, a status code of 200 is passed back from this Web API call to indicate that the product was successfully added to the shopping cart. At this point, you have everything you need on the back-end to add a product to the shopping cart via an Ajax call.

Figure 1: Adding an item to the shopping cart can be more efficiently handled using Ajax.

Modify the Add to Cart Link
It's now time to modify the client-side code to take advantage of this new Web API method. You no longer want a post-back to occur when you click on the Add to Cart link, so you need to remove the asp- attributes and add code to make an Ajax call. Open the Views\Shopping\_ShoppingList.cshtml file and locate the Add to Cart <a> tag and remove the asp-action="Add" and the asp-route-id="@item.ProductId" attributes. Add id and data- attributes, and an onclick event, as shown in the code snippet below.
Listing 1: Create a new Web API controller with methods to eliminate post-backs

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using PaulsAutoParts.AppClasses;
using PaulsAutoParts.Common;
using PaulsAutoParts.EntityLayer;
using PaulsAutoParts.ViewModelLayer;

namespace PaulsAutoParts.ControllersApi
{
  [ApiController]
  [Route("api/[controller]/[action]")]
  public class ShoppingApiController : AppController
  {
    #region Constructor
    public ShoppingApiController(AppSession session,
      IRepository<Product, ProductSearch> repo,
      IRepository<VehicleType,
        VehicleTypeSearch> vrepo) : base(session)
    {
      _repo = repo;
      _vehicleRepo = vrepo;
    }
    #endregion

    #region Private Fields
    private readonly IRepository<Product,
      ProductSearch> _repo;
    private readonly IRepository<VehicleType,
      VehicleTypeSearch> _vehicleRepo;
    #endregion

    #region AddToCart Method
    [HttpPost(Name = "AddToCart")]
    public IActionResult AddToCart([FromBody] int id)
    {
      // Set Cart from Session
      ShoppingViewModel vm = new(_repo, _vehicleRepo,
        UserSession.Cart);

      // Set "Common" View Model Properties from Session
      base.SetViewModelFromSession(vm, UserSession);

      // Add item to cart
      vm.AddToCart(id, UserSession.CustomerId.Value);

      // Set cart into session
      UserSession.Cart = vm.Cart;

      return StatusCode(StatusCodes.Status200OK, true);
    }
    #endregion
  }
}
<a class="btn btn-info" Listing 2: Add three methods in the pageController closure to modify the shopping cart
id="updateCart" function modifyCart(id, ctl) {
data-isadding="true" // Are we adding or removing?
onclick="pageController if (Boolean($(ctl).data("isadding"))) {
.modifyCart(@item.ProductId, this)"> // Add product to cart
addToCart(id);
Add to Cart // Change the button
</a> $(ctl).text("Remove from Cart");
$(ctl).data("isadding", false);
When you post back to the server, a variable in the view $(ctl).removeClass("btn-info")
.addClass("btn-danger");
model class is set on each product to either display the Add to Cart link or the Remove from Cart link. When using client-side code, you're going to toggle the same link to either perform the add or the remove. Use the data-isadding attribute on the anchor tag to determine whether you're doing an add or a remove.

Add Code to Page Closure
The onclick event in the anchor tag calls a method on the pageController called modifyCart(). You pass to this method the current product ID and a reference to the anchor tag itself. Add this modifyCart() method by opening the Views\Shopping\Index.cshtml file and adding the three private methods (Listing 2) to the pageController closure: modifyCart(), addToCart(), and removeFromCart(). The modifyCart() method is the one that's made public; the other two are called by the modifyCart() method.

The modifyCart() method checks the value in the data-isadding attribute to see if it's true or false. If it's true, call the addToCart() method, change the link text to "Remove from Cart", set the data-isadding="false", remove the class "btn-info", and add the class "btn-danger". If false, call the removeFromCart() method and change the attributes on the link to the opposite of what you just set. The tail end of Listing 2 looks like the following:

  else {
    // Remove product from cart
    removeFromCart(id);

    // Change the button
    $(ctl).text("Add to Cart");
    $(ctl).data("isadding", true);
    $(ctl).removeClass("btn-danger")
          .addClass("btn-info");
  }
}

function addToCart(id) {
}

function removeFromCart(id) {
}

Modify the return object to expose the modifyCart() method.

return {
  "setSearchArea": setSearchArea,
  "modifyCart": modifyCart
}

Create the addToCart() Method
Write the addToCart() method in the pageController closure to call the new AddToCart() method you added in the ShoppingApiController class. Because you're performing a post, you may use either the jQuery $.ajax() or $.post() methods. I chose to use the $.post() method in the code shown in the following snippet.

function addToCart(id) {
  let settings = {
    url: "/api/ShoppingApi/AddToCart",
    contentType: "application/json",
    data: JSON.stringify(id)
  }

  $.post(settings)
    .done(function (data) {
      console.log(
        "Product Added to Shopping Cart");
    })
    .fail(function (error) {
      console.error(error);
    });
}

Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart. You should notice that the link changes to Remove from Cart immediately. Click on the "0 Items in Cart" link in the menu bar and you should see an item in the cart. Don't worry about the "0 Items in Cart" link; you'll fix that a little later in this article.

The Problem: Delete from Shopping Cart Requires a Post-Back
Now that you've added a product to the shopping cart using Ajax, it would be good to also remove an item from the cart using Ajax. The link on the product you just added to the cart should now be displaying Remove from Cart (Figure 2). This was set via the JavaScript you wrote in the addToCart() method. The data-isadding attribute has been set to a false value, so when you click on the link again, the code in the modifyCart() method calls the removeFromCart() method.

codemag.com Enhance Your MVC Applications Using JavaScript and jQuery: Part 3 19
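The state toggling that modifyCart() performs on the link can be distilled into a small pure function. This is a sketch for illustration only; the nextCartLinkState() name is hypothetical and doesn't appear in the article's code, but it shows the attribute changes applied for each direction:

```javascript
// Hypothetical helper: given the current data-isadding value, return the
// link changes that modifyCart() applies. Pure logic, no jQuery required.
function nextCartLinkState(isAdding) {
  if (isAdding) {
    // We just added the product, so the link must offer removal next
    return { text: "Remove from Cart", isadding: false,
             removeClass: "btn-info", addClass: "btn-danger" };
  }
  // We just removed the product, so the link must offer adding next
  return { text: "Add to Cart", isadding: true,
           removeClass: "btn-danger", addClass: "btn-info" };
}
```

Keeping the decision logic separate from the jQuery calls like this makes the toggle easy to unit test.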
The removeFromCart() method makes the Web API call using the $.ajax() method with the DELETE verb, as shown in the following snippet.

function removeFromCart(id) {
  $.ajax({
    url: "/api/ShoppingApi/RemoveFromCart/" + id,
    type: "DELETE"
  })
  .done(function (data) {
    console.log(
      "Product Removed from Shopping Cart");
  })
  .fail(function (error) {
    console.error(error);
  });
}

Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart. You should notice that the link changes to Remove from Cart immediately. Click on the "0 Items in Cart" link in the menu bar and you should see an item in the cart. Click on the back button on your browser and click the Remove from Cart link on the item you just added. Click on the "0 Items in Cart" link and you should see that there are no longer any items in the shopping cart.

The Problem: The "n Items in Cart" Link isn't Updated
After modifying the code in the previous section to add and remove items from the shopping cart using Ajax, you noticed that the "0 Items in Cart" link in the menu bar isn't updating with the current number of items in the cart. That's because this link is generated by data from the server-side. Because you're bypassing server-side processing with Ajax calls, you need to update this link yourself.

The Solution: Add Client-Side Code to Update Link
Open the Views\Shared\_Layout.cshtml file and locate the "Items in Cart" link. Add an id attribute to the <a> tag and assign it the value of "itemsInCart", as shown in the following code snippet.

<a id="itemsInCart"
   class="text-light"
   asp-action="Index"
   asp-controller="Cart">
  @ViewData["ItemsInCart"] Items in Cart
</a>

Create a new method to increment or decrement the "Items in Cart" link. Open the wwwroot\js\site.js file and add a new method named modifyItemsInCartText() to the mainController closure, as shown in Listing 4. An argument is passed to this method to specify whether you're adding or removing an item from the shopping cart. This tells the method to either increment or decrement the number of items in the text displayed on the menu.

Listing 4: Add a modifyItemsInCartText() method to the mainController closure

function modifyItemsInCartText(isAdding) {
  // Get text from <a> tag
  let value = $("#itemsInCart").text();
  let count = 0;
  let pos = 0;

  // Find the space in the text
  pos = value.indexOf(" ");
  // Get the total # of items
  count = parseInt(value.substring(0, pos));

  // Increment or Decrement the total # of items
  if (isAdding) {
    count++;
  }
  else {
    count--;
  }

  // Create the text with the new count
  value = count.toString() + value.substring(pos);
  // Put text back into the cart
  $("#itemsInCart").text(value);
}

The modifyItemsInCartText() method extracts the text portion from the <a> tag holding the "0 Items in Cart". It calculates the position of the first space in the text, which allows you to parse the numeric portion, turn that into an integer, and place it into the variable named count. If the value passed into the isAdding parameter is true, then count is incremented by one. If the value passed is false, then count is decremented by one. The new numeric value is then placed where the old numeric value was in the string and this new string is inserted back into the <a> tag. Expose the modifyItemsInCartText() method from the return object on the mainController closure, as shown in the following code.

return {
  "pleaseWait": pleaseWait,
  "disableAllClicks": disableAllClicks,
  "setSearchValues": setSearchValues,
  "isSearchFilledIn": isSearchFilledIn,
  "setSearchArea": setSearchArea,
  "modifyItemsInCartText": modifyItemsInCartText
}

Call this function after making the Ajax call to either add or remove an item from the cart. Open the Views\Shopping\Index.cshtml file and locate the done() method in the addToCart() method. Add the line shown just before the console.log() statement.

$.post(settings)
  .done(function (data) {
    mainController.modifyItemsInCartText(true);
    console.log("Product Added to Shopping Cart");
  })
  // REST OF THE CODE HERE

Locate the done() method in the removeFromCart() method and add the line of code just before the console.log() statement, as shown in the following code snippet.

$.ajax({
  url: "/api/ShoppingApi/RemoveFromCart/" + id,
  type: "DELETE"
})
  .done(function (data) {
    mainController.modifyItemsInCartText(false);
    console.log("Product Removed from Shopping Cart");
  })
  // REST OF THE CODE HERE

Try It Out
Run the application and click on the Shop menu. Perform a search to display products on the Shopping page. Click on one of the Add to Cart links to add a product to the shopping cart and notice the link changes to Remove from Cart immediately. You should also see the "Items in Cart" link increment. Click on the Remove from Cart link and you should see the "Items in Cart" link decrement.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select "Articles" from the Category drop-down. Then select "Enhance your MVC Applications using JavaScript and jQuery: Part 3" from the Item drop-down.
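The string manipulation in Listing 4 can be isolated into a pure function so it's easy to verify. This sketch is for illustration only; the adjustItemsInCartText() name is hypothetical, but the logic mirrors the listing:

```javascript
// Hypothetical helper: the text-manipulation portion of Listing 4,
// separated from jQuery so it can be tested on its own.
function adjustItemsInCartText(value, isAdding) {
  // Find the space that ends the numeric portion, e.g. "0 Items in Cart"
  const pos = value.indexOf(" ");
  let count = parseInt(value.substring(0, pos));

  // Increment or decrement the item count
  count = isAdding ? count + 1 : count - 1;

  // Rebuild the text with the new count in front
  return count.toString() + value.substring(pos);
}
```

For example, adjustItemsInCartText("0 Items in Cart", true) returns "1 Items in Cart".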
The Problem: Dependent Drop-Downs Require Multiple Post-Backs
A common user interface problem to solve is that when you choose an item from a drop-down, you then need a drop-down immediately following to be filled with information specific to that selected item. For example, run the application and select the Shop menu to get to the Shopping page. In the left-hand search area, select a Vehicle Year from the drop-down list (Figure 3). Notice that a post-back occurs and now a list of Vehicle Makes is filled into the corresponding drop-down. Once you choose a make, another post-back occurs and a list of vehicle models is filled into the last drop-down. Notice the flashing of the page that occurs each time you change the year or make caused by the post-back.

The Solution: Connect All Drop-Downs to Web API Services
To eliminate this flashing, create Web API calls to return makes and models. After selecting a year from the Vehicle Year drop-down, an Ajax call is made to retrieve all makes for that year in a JSON format. Use jQuery to build a new set of <option> objects for the Vehicle Make drop-down. The same process can be done for the Vehicle Model drop-down as well.

Open the ControllersApi\ShoppingApiController.cs file and add a new method named GetMakes() to get all makes of vehicles for a specific year, as shown in Listing 5. This method accepts the year of the vehicle to search for. The GetMakes() method on the ShoppingViewModel class is called to set the Makes property with the collection of vehicle makes that are valid for that year. The set of vehicle makes is returned from this Web API method.

Listing 5: The GetMakes() method returns all vehicle makes for a specific year

[HttpGet("{year}", Name = "GetMakes")]
public IActionResult GetMakes(int year)
{
  IActionResult ret;

  // Create view model
  ShoppingViewModel vm = new(_repo,
    _vehicleRepo, UserSession.Cart);

  // Get all makes for this year
  vm.GetMakes(year);

  // Return the makes found
  ret = StatusCode(StatusCodes.Status200OK,
    vm.Makes);

  return ret;
}

Next, add another new method named GetModels() to the ShoppingApiController class to retrieve all models for a specific year and make as shown in Listing 6. In this method, both a year and a vehicle make are passed in. The GetModels() method on the ShoppingViewModel class is called to populate the Models property with all vehicle models for that specific year and make. The collection of vehicle models is returned from this Web API method.

Listing 6: The GetModels() method returns all vehicle models for a specific year and make

[HttpGet("{year}/{make}", Name = "GetModels")]
public IActionResult GetModels(int year,
  string make)
{
  IActionResult ret;

  // Create view model
  ShoppingViewModel vm = new(_repo,
    _vehicleRepo, UserSession.Cart);

  // Get all models for this year and make
  vm.GetModels(year, make);

  // Return the models found
  ret = StatusCode(StatusCodes.Status200OK,
    vm.Models);

  return ret;
}

Modify Shopping Cart Page
It's now time to add a couple of methods to your shopping cart page to call these new Web API methods you added to the ShoppingApiController class. Open the Views\Shopping\Index.cshtml file and add a method to the pageController named getMakes(), as shown in Listing 7.

The getMakes() method retrieves the year selected by the user. It then clears the drop-down that holds all vehicle makes and the one that holds all vehicle models. Next, a call is made to the GetMakes() Web API method using the $.get() shorthand method. If the call is successful, use the jQuery each() method on the data returned to iterate over the collection of vehicle makes returned. For each make, build an <option> element with the vehicle make within the <option> and append that to the drop-down.

Add another method to the pageController named getModels(), as shown in Listing 8. The getModels() method retrieves both the year and make selected by the user. Clear the models drop-down list in preparation for loading the new list. Call the GetModels() method using the $.get() shorthand method. If the call is successful, use the jQuery
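The clearing rules that getMakes() and getModels() implement can be summarized as a tiny piece of logic: changing the year invalidates both dependent lists, while changing the make invalidates only the models. A sketch for illustration only (the cascadeReset() name is hypothetical):

```javascript
// Hypothetical sketch of the cascade rule for the dependent drop-downs.
function cascadeReset(changed, lists) {
  if (changed === "year") {
    // A new year invalidates both makes and models
    lists.makes = [];
    lists.models = [];
  } else if (changed === "make") {
    // A new make invalidates only the models
    lists.models = [];
  }
  return lists;
}
```

In the article's code, the equivalent clearing is done with jQuery's empty() on the <select> elements.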
each() method on the data returned to iterate over the collection of vehicle models returned. For each model, build an <option> element with the vehicle model within the <option> and append that to the drop-down.

Listing 7: The getMakes() method retrieves vehicle makes and builds a drop-down

function getMakes(ctl) {
  // Get year selected
  let year = $(ctl).val();

  // Search for element just one time
  let elem = $("#SearchEntity_Make");

  // Clear makes drop-down
  elem.empty();
  // Clear models drop-down
  $("#SearchEntity_Model").empty();

  $.get("/api/ShoppingApi/GetMakes/" +
        year, function (data) {
    // Load the makes into drop-down
    $(data).each(function () {
      elem.append(`<option>${this}</option>`);
    });
  })
  .fail(function (error) {
    console.error(error);
  });
}

Listing 8: The getModels() method retrieves vehicle models and builds a drop-down

function getModels(ctl) {
  // Get currently selected year
  let year = $("#SearchEntity_Year").val();
  // Get make selected
  let make = $(ctl).val();

  // Search for element just one time
  let elem = $("#SearchEntity_Model");

  // Clear models drop-down
  elem.empty();

  $.get("/api/ShoppingApi/GetModels/" +
        year + "/" + make, function (data) {
    // Load the models into drop-down
    $(data).each(function () {
      elem.append(`<option>${this}</option>`);
    });
  })
  .fail(function (error) {
    console.error(error);
  });
}

Because you added two new private methods to the pageController closure, you need to expose these two methods by modifying the return object, as shown in the following code snippet.

return {
  "setSearchArea": setSearchArea,
  "modifyCart": modifyCart,
  "getMakes": getMakes,
  "getModels": getModels
}

Now that you have the new methods written and exposed from your pageController closure, hook them up to the appropriate onchange events of the drop-downs for the year and make within the search area on the page. Locate the <select> element for the SearchEntity.Year property and modify the onchange event to look like the following code snippet.

<select class="form-control"
  onchange="pageController.getMakes(this);"
  asp-for="SearchEntity.Year"
  asp-items="@(new SelectList(Model.Years))">
</select>

Next, locate the <select> element for the SearchEntity.Make property and modify the onchange event to look like the following code snippet.

<select class="form-control"
  onchange="pageController.getModels(this);"
  asp-for="SearchEntity.Make"
  asp-items="@(new SelectList(Model.Makes))">
</select>

Try It Out
Run the application and click on the Shop menu. Expand the "Search by Year/Make/Model" search area and select a year from the drop-down. The vehicle makes are now filled into the drop-down, but the page didn't flash because there's no longer a post-back. If you select a vehicle make from the drop-down, you should see the vehicle models filled in, but again, the page didn't flash because there was no post-back.

Figure 4: jQuery has validation capabilities as well as an auto-complete that can make your UI more responsive and avoid post-backs

The Problem: Allow a User to Either Select an Existing Category or Add a New One
Click on the Admin > Products menu, then click the Add button to allow you to enter a new product (Figure 4). Notice that the Category field is a text box. This is fine if you want to add a new Category, but what if you want the user
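The <option>-building step in Listings 7 and 8 reduces to a one-line transformation over the array the Web API returns. A sketch using a plain array (the buildOptions() name is hypothetical; the listings do the equivalent with $(data).each() and append()):

```javascript
// Hypothetical helper: turn an array of strings returned by the Web API
// into the <option> markup appended to a drop-down.
function buildOptions(items) {
  return items.map(item => `<option>${item}</option>`).join("");
}
```

For example, buildOptions(["Chevrolet", "Ford"]) produces "<option>Chevrolet</option><option>Ford</option>".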
to be able to select from the existing categories already assigned to products? You could switch this to a drop-down list, but then the user could only select an existing category and wouldn't be able to add a new one on the fly. What would be ideal is to use a text box, but also have a drop-down component that shows them the existing categories as they type in a few letters into the text box.

Modify the Product Repository Class
First, you need to make some changes to the back-end to support searching for categories by finding where a category starts with the text the user types in. Open the RepositoryClasses\ProductRepository.cs file in the PaulsAutoParts.DataLayer project and add a new method named SearchCategories() to this class. This method takes the characters entered by the user and queries the database to retrieve only those categories that start with those characters, as shown in the following code snippet.

public List<string> SearchCategories(
  string searchValue)
{
  return _DbContext.Products
    .Select(p => p.Category).Distinct()
    .Where(c => c.StartsWith(searchValue))
    .OrderBy(c => c).ToList();
}

This LINQ query roughly translates to the following SQL query.

SELECT DISTINCT Category
FROM Product
WHERE Category LIKE 'C%'

Modify the Shopping View Model Class
Instead of calling the repository methods directly from controller classes, it's best to let your view model class call these methods. Open the ShoppingViewModel.cs file in the PaulsAutoParts.ViewModelLayer project. Add a new method to this class named SearchCategories() that makes the call to the repository class method you just created.

Add New Web API to Controller
It's now time to create the Web API method for you to call via Ajax to search for the categories based on each character the user types into the text box. Open the ControllersApi\ShoppingApiController.cs file and add a new method named SearchCategories() to this controller class, as shown in Listing 9. This method accepts the character(s) typed into the Category text box; if it's blank, it returns all categories, and otherwise passes the search value to the SearchCategories() method you just created.

Listing 9: The SearchCategories() method performs a search for categories based on user input

[HttpGet("{searchValue}",
  Name = "SearchCategories")]
public IActionResult SearchCategories(
  string searchValue)
{
  IActionResult ret;

  ShoppingViewModel vm = new(_repo,
    _vehicleRepo, UserSession.Cart);

  if (string.IsNullOrEmpty(searchValue)) {
    // Get all product categories
    vm.GetCategories();

    // Return all categories
    ret = StatusCode(StatusCodes.Status200OK,
      vm.Categories);
  }
  else {
    // Search for categories
    ret = StatusCode(StatusCodes.Status200OK,
      vm.SearchCategories(searchValue));
  }

  return ret;
}

At the top of the page containing the Category text box, add a section to include the jquery-ui.css file. This is needed for styling the auto-complete drop-down.

@section HeadStyles {
  <link rel="stylesheet"
    href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
}

Now go to the bottom of the file and, in the @section Scripts, just before your opening <script> tag, add the following <script> tag to include the jQuery UI JavaScript file.

<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>

Add a new method to the pageController closure named categoryAutoComplete(). The categoryAutoComplete() method is the publicly exposed method that's called from the $(document).ready() to hook up the auto-complete to the category text box using the autocomplete() method. Pass in an object to the autocomplete() method to set the source property to a method named searchCategories(), which is called to retrieve the category data to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchCategories() method. I've set it to one, so the user must type in at least one character in order to have the searchCategories() method called.
function categoryAutoComplete() {
  // Hook up Category auto-complete
  $("#SelectedEntity_Category").autocomplete({
    source: searchCategories,
    minLength: 1
  });
}

Add the searchCategories() method that's called from the source property. This method must accept a request object and a response callback function. This method uses the $.get() method to make the Web API call to the SearchCategories() method passing in the request.term property, which is the text the user entered into the category text box. If the call is successful, the data retrieved back from the Ajax call is sent back via the response callback function.

function searchCategories(request, response) {
  $.get("/api/ShoppingApi/SearchCategories/" +
        request.term, function (data) {
    response(data);
  })
  .fail(function (error) {
    console.error(error);
  });
}

Modify the return object to expose the categoryAutoComplete() method.

return {
  "setSearchValues": setSearchValues,
  "setSearchArea":
    mainController.setSearchArea,
  "isSearchFilledIn":
    mainController.isSearchFilledIn,
  "categoryAutoComplete": categoryAutoComplete
}

Finally, modify the $(document).ready() function to call the pageController.categoryAutoComplete() method to hook up jQuery UI to the Category text box.

$(document).ready(function () {
  // Setup the form submit
  mainController.formSubmit();

  // Hook up category auto-complete
  pageController.categoryAutoComplete();

  // Collapse search area or not?
  pageController.setSearchValues();

  // Initialize search area on this page
  pageController.setSearchArea();
});

Try It Out
Run the application and select the Admin > Products menu. Click on the Add button and click into the Category text box. Type the letter T in the Category input field and you should see a drop-down appear of categories that start with the letter T.

On the Vehicle Type maintenance page, when the user wants to add a new vehicle, they should be able to either add a new make or select from an existing one.

Add New Method to Vehicle Type Repository
Let's start with modifying the code on the server to support searching for vehicle makes. Open the RepositoryClasses\VehicleTypeRepository.cs file and add a new method named SearchMakes(), as shown in the following code snippet.

public List<string> SearchMakes(string make)
{
  return _DbContext.VehicleTypes
    .Select(v => v.Make).Distinct()
    .Where(v => v.StartsWith(make))
    .OrderBy(v => v).ToList();
}

This LINQ query roughly translates to the following SQL query:

SELECT DISTINCT Make
FROM Lookup.VehicleType
WHERE Make LIKE 'C%'

Add a New Method to Vehicle Type View Model Class
Instead of calling the repository methods directly from controller classes, it's best to let your view model class call
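The contract jQuery UI expects from a source function is simple: it's called with an object containing the typed term, and it must pass an array of suggestions to the response callback. The following hypothetical stand-in uses a local array in place of the Web API call, purely to illustrate that shape:

```javascript
// Hypothetical stand-in for searchCategories(): same (request, response)
// contract that jQuery UI autocomplete uses, but backed by a local array.
var sampleCategories = ["Brakes", "Batteries", "Tires"];

function searchCategoriesLocal(request, response) {
  var term = request.term.toLowerCase();
  // Keep only the entries that start with the typed characters
  response(sampleCategories.filter(function (c) {
    return c.toLowerCase().startsWith(term);
  }));
}
```

Calling searchCategoriesLocal({ term: "b" }, cb) passes ["Brakes", "Batteries"] to cb, which is exactly what the widget renders in the drop-down.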
Listing 10: Create a new Web API controller for handling vehicle type maintenance

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using PaulsAutoParts.AppClasses;
using PaulsAutoParts.Common;
using PaulsAutoParts.DataLayer;
using PaulsAutoParts.EntityLayer;
using PaulsAutoParts.ViewModelLayer;

namespace PaulsAutoParts.ControllersApi
{
  [ApiController]
  [Route("api/[controller]/[action]")]
  public class VehicleTypeApiController :
    AppController
  {
    #region Constructor
    public VehicleTypeApiController(
      AppSession session,
      IRepository<VehicleType,
        VehicleTypeSearch> repo) : base(session)
    {
      _repo = repo;
    }
    #endregion

    #region Private Fields
    private readonly IRepository<VehicleType,
      VehicleTypeSearch> _repo;
    #endregion

    #region SearchMakes Method
    [HttpGet("{make}", Name = "SearchMakes")]
    public IActionResult SearchMakes(string make)
    {
      IActionResult ret;

      VehicleTypeViewModel vm = new(_repo);

      // Return all makes found
      ret = StatusCode(StatusCodes.Status200OK,
        vm.SearchMakes(make));

      return ret;
    }
    #endregion
  }
}

these methods. Open the VehicleTypeViewModel.cs file in the PaulsAutoParts.ViewModelLayer project. Add a new method to this class named SearchMakes() that makes the call to the repository class method you just created.

public List<string> SearchMakes(
  string searchValue)
{
  return ((VehicleTypeRepository)Repository)
    .SearchMakes(searchValue);
}

Create New API Controller Class
Add a new class under the ControllersApi folder named VehicleTypeApiController.cs. Replace all the code within the new file with the code shown in Listing 10. Most of this code is boiler-plate for a Web API controller. The important piece is the SearchMakes() method that's going to be called from jQuery Ajax to perform the auto-complete.

Add jQuery UI Code to Vehicle Type Page
Open the Views\VehicleType\VehicleTypeIndex.cshtml file and, at the top of the page, just below the setting of the page title, add a new section to include the jquery-ui.css file. This is needed for styling the auto-complete drop-down.

@section HeadStyles {
  <link rel="stylesheet"
    href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
}

Now go to the bottom of the file and, in the @section Scripts, just before your opening <script> tag, add the following <script> tag to include the jQuery UI JavaScript file.

<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>

Add a new method to the pageController closure named makesAutoComplete(). The makesAutoComplete() method is the publicly exposed method that is called from the $(document).ready() to hook up the jQuery auto-complete to the vehicle makes text box, just like you did for hooking up the category text box. Pass in an object to this method that sets the source property to a method named searchMakes(), which is called to retrieve the vehicle makes to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchMakes() method. I've set it to one, so the user must type in at least one character in order to have the searchMakes() method called.

function makesAutoComplete() {
  // Hook up Makes auto-complete
  $("#SelectedEntity_Make").autocomplete({
    source: searchMakes,
    minLength: 1
  });
}

Add the searchMakes() method that is called from the source property. This method must accept a request object and a response callback function. This method uses the $.get() method to make the Web API call to the SearchMakes() method passing in the request.term property, which is the text the user entered into the vehicle makes text box. If the call is successful, the data retrieved back from the Ajax call is sent back via the response callback function.

function searchMakes(request, response) {
  $.get("/api/VehicleTypeApi/SearchMakes/" +
        request.term, function (data) {
    response(data);
  })
  .fail(function (error) {
    console.error(error);
  });
}
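The pageController and mainController objects used throughout this article follow the revealing module pattern: an immediately invoked function holds private functions in a closure, and the returned object exposes only the public ones. A minimal sketch with hypothetical names:

```javascript
// Minimal sketch of the closure pattern behind pageController/mainController.
var demoController = (function () {
  // Private: not reachable from outside the closure
  function buildGreeting(name) {
    return "Hello, " + name;
  }

  // Public: exposed through the returned object
  function greet(name) {
    return buildGreeting(name);
  }

  return { "greet": greet };
})();
```

Here demoController.greet("CODE") works, while demoController.buildGreeting is undefined, which is why each new public method in this article must be added to the return object before it can be called from the page.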
Modify the return object to expose the makesAutoComplete() method.

return {
  "setSearchValues": setSearchValues,
  "setSearchArea":
    mainController.setSearchArea,
  "isSearchFilledIn":
    mainController.isSearchFilledIn,
  "addValidationRules": addValidationRules,
  "makesAutoComplete": makesAutoComplete
}

Finally, modify the $(document).ready() function to call the pageController.makesAutoComplete() method to hook up jQuery UI to the vehicle makes text box.

$(document).ready(function () {
  // Add jQuery validation rules
  pageController.addValidationRules();

  // Hook up makes auto-complete
  pageController.makesAutoComplete();
});

Listing 11: The searchModels() method passes three values to the SearchModels() Web API

function searchModels(request, response) {
  let year = $("#SelectedEntity_Year").val();
  let make = $("#SelectedEntity_Make").val();

  if (make) {
    $.get("/api/VehicleTypeApi/SearchModels/" +
          year + "/" + make + "/" +
          request.term, function (data) {
      response(data);
    })
    .fail(function (error) {
      console.error(error);
    });
  }
  else {
    // No make selected yet, so there are no models to suggest
    response([]);
  }
}
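Because searchModels() builds a three-segment URL, any segment containing spaces or other special characters would need encoding before being placed in the path. A defensive sketch (the searchModelsUrl() helper name is hypothetical; the article concatenates the values directly):

```javascript
// Hypothetical helper: build the SearchModels URL, encoding each segment.
function searchModelsUrl(year, make, term) {
  return "/api/VehicleTypeApi/SearchModels/" +
    [year, make, term].map(function (seg) {
      return encodeURIComponent(seg);
    }).join("/");
}
```

For example, searchModelsUrl(2000, "Chevrolet", "C") returns "/api/VehicleTypeApi/SearchModels/2000/Chevrolet/C", and a make such as "Land Rover" becomes "Land%20Rover" in the path.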
Add a new method named SearchModels() to the VehicleTypeApiController class. This method accepts the year, the make, and the character(s) typed into the Models text box, and returns the models found.

[HttpGet("{year}/{make}/{model}",
  Name = "SearchModels")]
public IActionResult SearchModels(int year,
  string make, string model)
{
  IActionResult ret;

  VehicleTypeViewModel vm = new(_repo);

  // Return all models found
  ret = StatusCode(StatusCodes.Status200OK,
    vm.SearchModels(year, make, model));

  return ret;
}

Modify the Page Controller Closure
Open the Views\VehicleType\VehicleTypeIndex.cshtml file. Add a new method to the pageController closure named modelsAutoComplete(). The modelsAutoComplete() method is the publicly exposed method called from the $(document).ready() to hook up the auto-complete to the models text box using the autocomplete() method. Pass in an object to this method to set the source property to the function to call to get the data to display in the drop-down under the text box. The minLength property is set to the minimum number of characters that must be typed prior to making the first call to the searchModels() function.

function modelsAutoComplete() {
  // Hook up Models auto-complete
  $("#SelectedEntity_Model").autocomplete({
    source: searchModels,
    minLength: 1
  });
}

Add the searchModels() method (Listing 11) that's called from the source property. This method must accept a request object and a response callback function. This method uses the $.get() method to make the Web API call to the SearchModels() method passing in the year, make, and the request.term property, which is the text the user entered into the vehicle model text box. If the call is successful, the data retrieved back from the Ajax call is sent back via the response callback function.

Modify the return object to expose the modelsAutoComplete() method.

return {
  "setSearchValues": setSearchValues,
  "setSearchArea":
    mainController.setSearchArea,
  "isSearchFilledIn":
    mainController.isSearchFilledIn,
  "addValidationRules": addValidationRules,
  "makesAutoComplete": makesAutoComplete,
  "modelsAutoComplete": modelsAutoComplete
}

Modify the $(document).ready() function to make the call to the pageController.modelsAutoComplete() method, as shown in Listing 12.

Listing 12: Hook up the auto-complete functionality by calling the appropriate method in the pageController closure

$(document).ready(function () {
  // Add jQuery validation rules
  pageController.addValidationRules();

  // Setup the form submit
  mainController.formSubmit();

  // Hook up makes auto-complete
  pageController.makesAutoComplete();

  // Hook up models auto-complete
  pageController.modelsAutoComplete();
});

Try It Out
Run the application and click on the Admin > Vehicle Types menu. Click on the Add button and put the value 2000 into the Year text box. Type/Select the value Chevrolet in the Makes text box. Type the letter C in the Models text box and you should see a few models appear.

Summary
In this article, you once again added some functionality to improve the user experience of your website. Calling Web API methods from jQuery Ajax can greatly speed up the performance of your Web pages. Instead of having to perform a complete post-back and redraw the entire Web page, you can retrieve a small amount of data and update just a small portion of the Web page. Eliminating post-backs is probably one of the best ways to improve the user experience of your Web pages. Another technique you learned in this article was to take advantage of the jQuery UI auto-complete functionality.

Paul D. Sheriff
ONLINE QUICK ID 2201041

Software Development is Getting Harder, Not Easier

I hate that expression, "It's complicated." When people tell me it's complicated, it almost always isn't. What they're really saying is that they don't want to talk about it. But when we're talking about software development, it really IS complicated. Software development is notoriously difficult and it's getting steadily harder. And we need to talk about it. Imagine a world where a developer who understands the basics of programming (things like variables, branching, and looping) can spend a weekend reading a manual cover to cover and become productive in a new environment by Monday morning. An environment that includes built-in UI, database, and reporting tools, its own IDE, integration paths for third-party systems, and everything else they might need to build and maintain a system. A world where the tool is only updated about once every two years or so and where reading a couple of articles or attending a conference for a few days gets them up to speed on ALL the new stuff. A world where developers get very skilled at using the tools and become masters of their craft.

That world existed between 20 and 30 years ago. Since then, the developer's world has accelerated and expanded at an increasing pace. Today, just keeping up with what's available to us is like drinking from a fire hose. Where do we even find time to write code?

Before you go thinking this article is about nostalgia, it isn't.

I believe in the original definition of nostalgia, as a medical diagnosis for grave illness, one that couldn't be cured, even when the patient was lucky enough to be able to try to relive those memories. I believe we can't—and shouldn't—go back, only forward. But all hope is not lost. Let's start a conversation about complexity and begin to deal with it.

Where is complexity coming from? I believe it stems from the plethora of platforms and toolsets, higher expectations, larger systems, bigger teams, faster delivery of new ideas, and an ever-expanding set of opinions. Today, a full-stack developer must have a good working knowledge of multiple user interface tools, databases, cloud offerings, and business logic tools. Even picking just one UI platform, say HTML and JavaScript, there are hundreds of frameworks to choose from and they need to be mixed and matched to get a good result. New frameworks and major updates to existing …

… who previously touted giving teams complete autonomy over their platforms and toolsets, to choosing and standardizing on a smaller subset. These companies are coping with too many choices, which leads to both decision paralysis and an inability to get everyone on the same page. Constraining the number of choices is one way to combat complexity.

Higher expectations, larger systems, and larger teams are other drivers of complexity. Development is no longer just getting the code to function and doing some performance tuning. It also has to be secure, scalable, a great user experience, accessible from a variety of devices, cloud-deployable, tested, maintainable, kept up to date, and continuously improved. Also, at one time, most apps targeted internal users; now most apps are targeted at consumers, which ups the bar considerably.

The move toward both more systems and larger systems requires more teams and more developers, which leads to difficulties getting everyone rowing the boat in the same direction. When I say teams in this article, I don't just mean a development team within a company. I also mean the team developing the toolsets and platforms and those consuming what the team is building. We must work with teams both inside and outside our walls. This complexity can be mitigated by better collaboration.

The collaboration problem isn't unique to the software industry, and we're getting better at meeting these challenges, but the trends aren't going to slow down or reverse themselves.

The biggest drivers of complexity, in my experience, and those we can do the most about, are the last two I mentioned.

Mike Yeager
www.internet.com
Mike is the CEO of EPS's Houston office and a skilled .NET developer. Mike excels at evaluating business requirements and turning them into results from development teams. He's been the Project Lead on many projects at EPS and promotes the use of modern best practices, such as the Agile development paradigm, use of design patterns, and test-driven and test-first development. Before coming to EPS, Mike was a business owner developing a high-profile software business in the leisure industry. He grew the business from two employees to over 30 before selling the company and looking for new challenges. Implementation experience includes .NET, SQL Server, Windows Azure, Microsoft Surface, and Visual FoxPro.
ing frameworks are coming out almost daily. No one can The faster delivery of new ideas and the ever-expanding set
make a perfect decision. of opinions have, ironically, sprung from our own attempts
to solve complexity issues and to make things easier. Today,
That’s just for development; I didn’t include new Web everyone who solves an obstacle to coding productivity can
assembly-centric UIs like Blazor. I also ignored desktop publish and promote that solution. The solutions are often
and native mobile applications and more, and included quite good and are easy to find on the Internet by anyone
choosing and learning an IDE to develop in. The number of having a similar problem. However, sometimes the solution
platforms and toolsets available today is astounding and becomes widely accepted and is thought of as the “right
the pendulum is now swinging back for many companies way” to code.
How, then, can we be better at using the right tools at the What can we do about the complexity that comes from the
right time? There’s no magic solution, only hard work, care- steady gushing of new ideas? I believe that we, as an in-
ful reflection, and creativity. Let’s examine the problem. dustry, should be more demanding of ourselves and of oth-
ers. Instead of looking only at all of the good that comes
from something new, we should demand to also know the
How to Solve the Problem of Complexity downsides and demand help in determining when and where
When we think about how to solve the problem of complex- these ideas can be used to our advantage. Often, when look-
ity, the answer invariably comes down to breaking down a ing at a new technology, the material is presented by diving
complex system into a series of smaller, more discrete, less in and showing how it works. There isn’t even a mention of
complex systems that aren’t too complex standing alone. the tradeoffs or even which problems are being solved.
This makes the new modules more approachable and the
goals easier to achieve. But nothing comes for free and the
catch is that we now have to communicate and coordinate
among these smaller, more discreet systems, which often Wouldn’t it be refreshing to read
about a new technology and
why it was created, which things
it does really well, and which things
it isn’t intended for instead of
enable CI/CD (https://www.redhat.com/en/topics/devops/what-is-ci-cd) for the automated deployments on GCP.

Now, you'll go a step further and connect your app to a local MySQL database. Then, you'll introduce the Google Cloud SQL service and create your first SQL database in the cloud. Right after that, I'll show you one way to run Laravel database migrations from within the Cloud Build workflow. Finally, I'll enhance the Cloud Build workflow by showing you how to back up the SQL database every time you deploy a new version of the app on GCP.

First things first, let's locally connect the Laravel app to a MySQL database.

Bilal Haidar
[email protected]
https://www.bhaidar.dev
@bhaidar

Bilal Haidar is an accomplished author, a Microsoft MVP of 10 years, an ASP.NET Insider, and has been writing for CODE Magazine since 2007. With 15 years of extensive experience in Web development, Bilal is an expert in providing enterprise Web solutions. He works at Consolidated Contractors Company in Athens, Greece as a full-stack senior developer. Bilal offers technical consultancy for a variety of technologies including Nest JS, Angular, Vue JS, JavaScript, and TypeScript.

Create and Use a Local MySQL Database
In Part 1 of this series, I introduced Laravel Sail (https://laravel.com/docs/8.x/sail). It's a service offered by the Laravel team to dockerize your application locally. One of the containers that Sail creates locally, inside Docker, is the mysql container. It holds a running instance of the MySQL Database Service. Listing 1 shows the mysql service container section inside the docker-compose.yaml file.

When you start the Sail service, it creates a Docker container for the mysql service and automatically configures it with a MySQL database that's ready to use in your application.

Sail picks up the database details from the application environment variables that you usually define inside the .env file. Let's have a look at the database section of the .env file:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=sail
DB_PASSWORD=password

These are the default settings that ship with a new Laravel application using the Sail service. You can change the settings as you see fit. For now, I just changed the database name to be gcp_app.

In Laravel 8.x, you can use any of the following database systems: MySQL 5.7+, PostgreSQL 9.6+, SQLite 3.8.8+, or SQL Server 2017.

Step 1: Create and Connect to a MySQL Database
For now, let's keep the settings as they are and start up the Docker containers using the Sail service command:

sail up -d

This command starts up all services that the docker-compose.yaml file hosts.

Let's connect to the database that Sail has created. I'm using the TablePlus (https://tableplus.com/) database management tool to connect to the new database. Feel free to use any other management tool of your preference. Figure 1 shows the database connection window.
32 Beginner’s Guide to Deploying PHP Laravel on the Google Cloud Platform: Part 2 codemag.com
Figure 2: Open database connection
\App\Http\Controllers\EditorController::class,
    'store'
]);

The first route allows users to access the view (GET) and has a route name of editors.index. The second route, on the other hand, allows executing a POST request to create a new editor record in the database.

The routes use the EditorController that you haven't created yet. Let's run a new Artisan command to create this controller:

sail artisan make:controller EditorController

This command creates a new controller under the \App\Http\Controllers folder.

The index() action retrieves all stored editors in the database and returns the editors view together with the data to display.
public function index()
{
    $editors = Editor::all();
    return view('editors', compact('editors'));
}

The store() action takes care of storing a new editor record in the database. It validates the POST request to make sure that all required fields are there. Then, it creates a new editor record in the database. Finally, it redirects the user to the editors.index route (you defined this route inside \routes\web.php). Listing 2 shows the store() action source code in its entirety.

Listing 2: store() method

public function store(Request $request)
{
    $request->validate([
        'name' => 'required',
        'company' => 'required',
        'operating_system' => 'required',
        'license' => 'required',
    ]);

    Editor::create($request->all());

    return redirect()->route('editors.index')
        ->with('success', 'Editor created successfully.');
}
Figure 3: Editors’ view in the browser
$table->string('name', 255);
$table->string('company', 500);
$table->string('operating_system', 500);
$table->string('license', 255);
$table->timestamp('created_at')->useCurrent();
$table->timestamp('updated_at')->nullable();
});

Inside the up() method, you're creating the editors table and specifying the columns that should go under it. Listing 4 shows the Editor model class.

Now that the migration and model are both ready, let's run the migration to create the new table in the database. Run the following command:

sail artisan migrate

You can verify that this command has created the table by checking your database.

Make sure the database connection is up and running. Figure 3 shows the Editors' view in the browser.

That's it! Now that you've successfully connected your app to a local database, let's explore Google Cloud SQL and configure the app to use one.

Before moving on, make sure you commit and push your changes onto GitHub. Keep in mind that this action triggers the GCP Cloud Build Workflow to deploy a new version of your Laravel app.

Google Cloud SQL is a fully managed relational database that supports MySQL, PostgreSQL, and Microsoft SQL Server.
Figure 4: Google Cloud SQL
In this section, you're going to create your first Cloud SQL instance and connect to it from your Laravel app running on GAE (Google App Engine).

Log into your account at https://console.cloud.google.com/ and navigate to the Cloud SQL section by selecting it from the left-side menu. Figure 4 shows where to locate Cloud SQL on the main GCP menu.

GCP (Google Cloud Platform) takes you through a few steps to help you easily create a new instance. Let's start!

Step 1: Create a MySQL Instance
Locate and click the CREATE INSTANCE button. Follow the steps to create your first Cloud SQL instance.

The next step is to select which database engine you're going to create. For this series, stick with a MySQL database. Figure 5 shows the database engine offerings by GCP.

Select the Choose MySQL button. Next, GCP prompts you to fill in the configuration details that it needs to create your MySQL instance. Figure 6 shows the MySQL instance configuration settings.

At a minimum, you need to input the following fields:

• Instance ID
• Password (for the root MySQL instance user). Make sure you remember this password, as you'll need it later.
• Database version. I'll stick with MySQL 5.7 for now.
• Region (preferably the same region you picked for the GAE app)
• Zonal availability (either single or multiple zones, depending on your requirements and needs)

The rest of the fields are optional. Look at them in case you want to change anything.
Figure 6: MySQL instance configuration settings
Click the CREATE DATABASE button to create a database for the Laravel app. Figure 8 shows the create database form.

Provide a name for the new database and click the CREATE button. Just a few seconds later, you'll see the new database listed under the current instance's list of databases.

Locate and click Connections on the left-side menu. This page lists all the networking configurations that govern your database instance.

For now, keep the Public IP option selected. It allows you to connect to your database by using the Cloud SQL Proxy.
You have multiple ways to use the Cloud SQL Auth Proxy. You can start the Cloud SQL Auth Proxy using TCP sockets, Unix sockets, or the Cloud SQL Auth Proxy Docker image. You can read about how to use the Cloud SQL Auth Proxy here: https://cloud.google.com/sql/docs/mysql/connect-admin-proxy#tcp-sockets

For this series, I'll be using the TCP sockets connection. Use the following command to connect to the Cloud SQL instance:

./cloud_sql_proxy \
  -instances=INSTANCE_CONNECTION_NAME=tcp:3306

Replace the INSTANCE_CONNECTION_NAME with the real connection name.

Figure 9 shows the Cloud SQL Auth Proxy connected and ready to establish a database connection to any database under the currently connected instance.

With TCP connections, the Cloud SQL Auth Proxy listens on localhost (127.0.0.1) by default. So when you specify tcp:PORT_NUMBER for an instance, the local connection is at 127.0.0.1:PORT_NUMBER. Figure 10 shows how to connect to the db_1 database using TablePlus (https://tableplus.com/).
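Before pointing a database client at the proxy, it can help to confirm that something is actually listening on the forwarded port. The helper below is not from the article; it's a generic sketch, and the 127.0.0.1:3306 endpoint in the commented example is an assumption based on the tcp:3306 mapping above:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: is the proxy accepting connections on its default local port?
# (assumes the tcp:3306 mapping used in the command above)
# port_is_open("127.0.0.1", 3306)
```

If this returns False, the proxy either isn't running or is bound to a different port, and any client connection attempt will fail for the same reason.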
I’ve highlighted the important fields that need your attention: On a terminal window, run the following command to re-
fresh the app configurations:
• Name: The name of the connection
• Host: The IP address of the server hosting the MySQL php artisan config:cache
database. In this case, it’s 127.0.0.1.
• User: The database user. In this case, you’re using the This command clears the old configurations and caches the
root user. new ones.
• Password: The user password. The root password that
you previously created for the MySQL instance. The app, when it runs, is now connected to the Cloud data-
• Database: The name of the database. In this case, it’s db_1. base. Hence, to run the Laravel migrations, you just need to
run the following command:
Click the Connect button to successfully connect to the
database. The database is still empty and you’ll fill it with php artisan migrate:fresh
Laravel tables in the next section.
Notice the use of php artisan rather than sail artisan. You
Step 4: Run Laravel Migrations on the New Database use the sail command only when interacting with the Sail
In Steps 2.4 and 2.5, you created a database migration and environment locally.
pushed the code to GitHub. By pushing the code, the GCP
Cloud Build Workflow runs and deploys a new version of the Switch back to TablePlus to see all the tables there. Figure
app. This means that your app on the GAE is now up to date, 11 shows all the tables running a fresh Laravel migration.
with the editors’ view up and running.
Before you test your view on GAE, let’s run the Laravel mi-
grations on the cloud database. While the Cloud SQL Auth
Proxy is running, switch to the app source code and apply
the following changes.
Locate and open the .env file and update the database sec-
tion as follows:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=db_1
DB_USERNAME=root
DB_PASSWORD=
You have successfully prepared the database, and it's ready to accept new connections.

Step 5: Enable GCP APIs and Permissions
Before you can connect to the cloud database from within GAE, you need to enable a few libraries and add some permissions.

Locate the following APIs and enable them in this order:

• Cloud SQL Admin API
• Google App Engine Flexible Environment

In Google Cloud, before using a service, make sure to enable its related APIs and Libraries.

To enable any API or Library on GCP, on the left-side menu, click the APIs and Services menu item. Then, once you're on the APIs and Services page, click the Library menu item. Search for any API and enable it by clicking the ENABLE button.

In addition to enabling the two APIs, you need to add the role of Cloud SQL Client for the GAE Service Account under the IAM and Admin section.

On the left-side menu, locate and click the IAM and Admin menu item. Click the pencil icon to edit the account that ends with @appspot.gserviceaccount.com and that has the name App Engine default service account. Figure 12 shows how to add the Cloud SQL Client role.

Figure 12: Adding the Cloud SQL Client role on the App Engine default user

Now that you've configured all APIs and libraries, let's connect the Laravel app that's running on GAE to the cloud database.

Step 6: Configure Laravel App on GAE to Use the Cloud Database
It's time to configure your app running inside GAE to connect to this cloud database. Start by locating and opening the \app.yaml file at the root folder of the app.

Open the file and append the following settings under the env_variables section:

DB_DATABASE: db_1
DB_USERNAME: root
DB_PASSWORD:
DB_SOCKET: '/cloudsql/INSTANCE_CONNECTION_NAME'

Replace the INSTANCE_CONNECTION_NAME with the real connection name. Then, add a new section to the \app.yaml file:

beta_settings:
  cloud_sql_instances: 'INSTANCE_CONNECTION_NAME'

This enables GAE to establish a Unix domain socket connection to the cloud database. You can read more about connecting to the Cloud database from GAE here: https://cloud.google.com/sql/docs/mysql/connect-app-engine-flexible#php.

Commit and push your changes to GitHub. This triggers the Google Cloud Build workflow to run and deploy a new version of the app. Wait until GCP deploys your app, then access the editors' view at this URL: https://prismatic-grail-323920.appspot.com/editors.

This URL opens the coding editors' view. Start adding a few coding editors to try out the database connection and make sure all is working smoothly. Figure 13 shows the Editors' view up and running on GCP.

Figure 13: Editors' view up and running on GCP

You have successfully connected your Laravel app to the Google Cloud SQL database. Let's move on and enhance the Google Cloud Build workflow.

Step 1: Add a Controller Endpoint to Run the Migrations
Let's start by adding a new invokable controller in Laravel with the Artisan make:controller command. An invokable controller is a single-action controller that contains an __invoke method to perform a single task. Listing 5 shows the __invoke() function implementation.

Listing 5: SetupController __invoke() method

public function __invoke(Request $request):
    \Illuminate\Http\Response
{
    try {
        Log::debug('Starting: Run database migration');
The function is simple. It calls the migrate command using the Artisan::call() function. It also does some logging to trace whether this task runs or fails.

The next step is to add a new route inside the file \routes\web.php as follows:

Route::post(
    '/setup/IXaOonJ3B7',
    '\App\Http\Controllers\SetupController'
);

I'm adding a random string suffix to the /setup/ URL, trying to make it difficult to guess this route path. One final step is to locate and open the \app\Http\Middleware\VerifyCsrfToken.php file. Then, make sure to enlist the /setup/ URL inside the $except array as follows:

protected $except = [
    'setup/IXaOonJ3B7'
];

This way, Laravel won't do a CSRF token verification (https://laravel.com/docs/8.x/csrf) when GCP requests the /setup/ URL.

Step 2: Amend Google Cloud Build to Run Migrations
Switch to \ci\cloudbuild.yaml and append a new Cloud Build step to invoke the /setup/ URL from within the Build workflow. Listing 6 shows the build step to invoke the /setup/ URL.

The build step uses a container of the gcr.io/cloud-builders/gcloud Docker image to run a curl command in a new bash shell. To learn more about Google Cloud Build steps, check this resource: https://cloud.google.com/build/docs/build-config-file-schema.

The build step issues a POST curl request to a URL represented by $_APP_BASE_URL. Google Cloud Build substitutes this variable with an actual value when it runs the trigger. The value of this variable shall be the full app /setup/ URL. You can learn more about Google Cloud Build substitution here: https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values.

Step 3: Amend the Cloud Build Trigger to Pass Over the _APP_BASE_URL
Visit the list of triggers under Google Cloud Build. Locate the deploy-main-branch trigger and click to edit it. Figure 14 shows how to edit a Google Cloud Build trigger.

Once on the Edit Trigger page, scroll to the bottom, then locate and click the ADD VARIABLE button. This prompts you to enter a variable name and value. Variable names should start with an underscore. To use this same variable inside the Google Cloud Build workflow, you need to prefix it with a $ sign. At run time, when the trigger runs, GCP substitutes the variable inside the Build workflow with the value you've assigned on the trigger definition. Figure 15 shows the _APP_BASE_URL variable together with its value.

Save the trigger and you're ready to go!
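To recap the substitution mechanics in one condensed sketch (the URL shown is assembled from this article's app URL and setup route; substitute your own):

```yaml
# Trigger definition (set in the Cloud Build UI):
#   _APP_BASE_URL = https://prismatic-grail-323920.appspot.com/setup/IXaOonJ3B7
#
# cloudbuild.yaml references the variable with a $ prefix;
# Cloud Build substitutes the value at run time:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args: ['-c', 'curl -d "" -X POST $_APP_BASE_URL']
```

The underscore marks the name as a user-defined substitution; the $ prefix is how the workflow reads it.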
Listing 6: Invoke /setup/ inside the Google Build file

- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: "bash"
  args:
  - "-c"
  - |
    RESPONSE=$(curl -o /dev/null -s -w "%{http_code}" \
      -d "" -X POST $_APP_BASE_URL)
    if [ "200" != "$$RESPONSE" ];
    then
      echo "FAIL: migrations failed"
      exit 1;
    else
      echo "PASS: migrations ran successfully"
    fi

Step 4: Run and Test the Trigger
Before running the trigger, let's make a new migration, for example, to add a new description column on the editors table.

Run the following command to generate a new migration file:

sail artisan make:migration \
    add_description_column_on_editors_table

Listing 7 shows the entire migration file.
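Off GCP, the status-check logic in Listing 6 can be prototyped with a few lines of plain Python (illustrative only, not part of the article's pipeline): POST an empty body and treat anything other than HTTP 200 as a failure.

```python
from urllib import request, error

def post_and_check(url: str) -> bool:
    """POST an empty body to `url` and report whether it answered HTTP 200,
    mirroring the curl check used in the Cloud Build step."""
    try:
        req = request.Request(url, data=b"", method="POST")
        with request.urlopen(req) as resp:
            return resp.status == 200
    except error.URLError:
        # Connection failures and non-2xx responses both count as failure.
        return False
```

In the real build step, a False result corresponds to the `exit 1` branch that fails the workflow.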
Listing 7: Database migration file

class AddColumnDescriptionOnEditorsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::table('editors', function($table) {
            $table->string('description', 255)->nullable();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::table('editors', function($table) {
            $table->dropColumn('description');
        });
    }
}
The migration, when run, adds a new column to the editors table inside the database.

Save your work by adding all the new changes to Git and committing them to GitHub. This, in turn, triggers GCP to run the associated Google Cloud Build workflow.

Eventually, a new version of the app will be deployed on GCP. However, this time, the trigger will also POST to the /setup/ URL to run the Laravel migrations as part of running the Build workflow.

You can check the database to make sure the Build workflow runs the Laravel migration and accordingly adds a new column to the editors table.

Conclusion
This article was a continuation of the previous one. You started by connecting your Laravel app to a local database. The next step was to create a Google Cloud database and connect the app to it when running inside GAE. Finally, you enhanced the Google Cloud Build workflow to run the Laravel migrations as part of the Build workflow itself.

Next time I see you, I'll strengthen running the Laravel migration by implementing an authentication layer with Google Secret Manager.

In addition, deployments can sometimes go wrong, so you should have a backup plan. As you'll see in the next article, you can take a backup snapshot of the Cloud database every time you run the Build workflow. If anything goes wrong, you can revert to an old backup of the database.

And much more… See you then!

Bilal Haidar
ONLINE QUICK ID 2201061
This article provides a deep dive on how to work with Apache Kafka in ASP.NET 6 Core.

If you want to work with the code examples discussed in this article, you should have the following installed in your system:

Introduction to Apache Kafka
Streaming data refers to data constantly produced by hundreds of data sources, which often transmit the data records concurrently. A streaming platform must manage this continual influx of data while still processing it sequentially and progressively.

Kafka is a publish/subscribe messaging platform with built-in support for replication, partitioning, fault tolerance, and better throughput. It's an excellent choice for applications that need large-scale data processing. Kafka is mainly used to build real-time streaming data pipelines. Kafka incorporates fault-tolerant storage and stream-processing capabilities to allow for the storage and analysis of historical and real-time data.

Here's the list of Apache Kafka features:

• It can publish and subscribe to streams of data.
• It's capable of handling a vast number of read/write operations per second.
• It can persist data for a particular period.
• It has the ability to grow elastically with zero downtime.
• It offers support for replication, partitioning, and fault tolerance.

Low latency. Latency refers to the amount of time required to process each message. Apache Kafka can provide high throughput with low latency and high availability.

Durability. Kafka messages are highly durable because Kafka stores the messages on disk, as opposed to in memory.

Kafka vs. Traditional Messaging Systems
Kafka differs from traditional messaging queues in several ways. Kafka retains a message after it has been consumed; by contrast, its competitor RabbitMQ deletes messages immediately after they've been consumed.

RabbitMQ pushes messages to consumers, whereas Kafka consumers fetch messages by pulling.

Kafka can be scaled horizontally, while traditional messaging queues scale vertically.
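Two of these differences, retention after consumption and pull-based delivery, can be illustrated with a toy in-memory sketch (plain Python, no Kafka client involved; all names here are illustrative): messages stay in the log, and each consumer pulls from its own offset, so independent consumers can read the same data.

```python
class ToyLog:
    """A minimal, in-memory stand-in for one Kafka topic partition.

    Messages are appended to a log and retained after consumption;
    each consumer pulls batches starting from its own offset.
    """
    def __init__(self):
        self._messages = []

    def append(self, msg):
        self._messages.append(msg)

    def poll(self, offset, max_records=10):
        """Pull up to max_records messages starting at `offset`.
        Returns the batch and the consumer's next offset."""
        batch = self._messages[offset:offset + max_records]
        return batch, offset + len(batch)


log = ToyLog()
for i in range(3):
    log.append(f"event-{i}")

# Two independent consumers, each tracking its own offset from zero:
batch_a, offset_a = log.poll(0)  # consumer A sees all retained messages
batch_b, offset_b = log.poll(0)  # consumer B sees the same messages again
```

In a push-based queue like RabbitMQ, the broker would have delivered and then deleted each message, so a second consumer could not replay it this way.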
Components of the
Apache Kafka Architecture
The Apache Kafka architecture is comprised
of the following components:
• Kafka Broker: A Kafka broker acts as a middleman between producers and consumers, hosting topics and partitions and enabling the sending and receiving of messages between them. The brokers in a typical production Kafka cluster can handle many reads/writes per second. Producers and consumers don't communicate directly. Instead, they communicate using these brokers. Thus, if one of the producers or consumers goes down, the communications pipeline continues to function as usual.

Figure 1 illustrates a high-level Kafka architecture. A Kafka cluster comprises one or more Kafka brokers. Producers push messages into the Kafka topics in a Kafka broker, and consumers pull those messages off a Kafka topic.

When Not to Use Kafka

Despite being the most popular messaging platform and having several advantages, you should not use Kafka in any of the following use cases:

• Kafka's not a good choice if you need your messages processed in a particular order. To process messages in a specific order, you should have one consumer and one partition. Instead, in Kafka, you have multiple consumers and partitions, so it isn't an ideal choice in this use case.
• Kafka isn't a good choice if you only need to process a few messages per day (maybe up to several thousand). Instead, you can take advantage of traditional messaging queues like RabbitMQ.
• Kafka is overkill for ETL jobs when real-time processing is required, because it isn't easy to perform data transformations dynamically.
• Kafka is also not a good choice when you need a simple task queue. Instead, it would be best to leverage RabbitMQ here.

Kafka isn't a replacement for a database, and it should never be used for long-term storage. Because Kafka stores redundant copies of data, it might be a costly affair as well. When you need data to be persisted in a database for querying, insertion, and retrieval, you should use a relational database like Oracle or SQL Server, or a non-relational database like MongoDB.

Setting Up Apache Kafka

First off, download the Apache Kafka setup file from the location mentioned earlier. Now switch to the Downloads folder on your computer and install the downloaded files one by one. Kafka is available as a zip file, so you must extract the archive to the folder of your choice. Assuming you've already downloaded and installed 7-Zip and Java on your computer, you can proceed with setting up and running Apache Kafka.

Now follow the steps outlined below:

1. Switch to the Kafka config directory on your computer. It is D:\kafka\config on my computer.
2. Open the file server.properties.
3. Find and replace the line "log.dirs=/tmp/kafka-logs" with "log.dirs=D:/kafka/kafka-logs", as shown in Figure 2.
4. Save and close the server.properties file.
5. Now open the file zookeeper.properties.
6. Find and replace the line "dataDir=/tmp/zookeeper" with "dataDir=D:/kafka/zookeeper-data", as shown in Figure 3.
7. Save and close the file.

By default, Kafka runs on port 9092 on your computer and connects to ZooKeeper at the default port 2181.

Switch to your Kafka installation directory and start ZooKeeper using the following command:

.\bin\windows\zookeeper-server-start.bat config\zookeeper.properties

Figure 4 is how it looks when ZooKeeper is up and running on your system.

Launch another command window and write the following command in there to start Kafka:

.\bin\windows\kafka-server-start.bat config\server.properties

When Kafka is up and running on your system, it looks like Figure 5.
Create Topic(s)

Now that ZooKeeper and Kafka are both up and running, you should create one or more topics. To do this, launch a new command prompt window, type the following command in there, and press Enter:

kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

You can list all topics in a cluster using the following command:

.\bin\windows\kafka-topics.bat --list --zookeeper localhost:2181

Working with Apache Kafka in ASP.NET Core 6

In this section, you'll implement a simple Order Processing application. You'll build two applications: the producer application and the consumer application. Both of these applications will be created using ASP.NET 6 in the Visual Studio 2022 IDE.

Create a New ASP.NET 6 Project in Visual Studio 2022

Let's start building the producer application first. You can create a project in Visual Studio 2022 in several ways. When you launch Visual Studio 2022, you'll see the Start window. You can choose "Continue without code" to launch the main screen of the Visual Studio 2022 IDE.

To create a new ASP.NET 6 project in Visual Studio 2022:

1. Start the Visual Studio 2022 Preview IDE.
2. In the "Create a new project" window, select "ASP.NET Core Web API" and click Next to move on.
3. Specify the project name as ApacheKafkaProducerDemo and the path where it should be created in the "Configure your new project" window.
4. If you want the solution file and project to be created in the same directory, you can optionally check the "Place solution and project in the same directory" checkbox. Click Next to move on.
5. In the next screen, specify the target framework and authentication type as well. Ensure that the "Configure for HTTPS," "Enable Docker Support," and "Enable OpenAPI support" checkboxes are unchecked because you won't use any of these in this example.
6. Click Create to complete the process.

Follow the same steps outlined above to create another ASP.NET Core 6 Web API project. Name this project ApacheKafkaConsumerDemo. Note that you can also choose any meaningful name for both of these projects.

You now have two ASP.NET Core 6 Web API projects: ApacheKafkaProducerDemo and ApacheKafkaConsumerDemo.

Install NuGet Package(s)

So far so good. The next step is to install the necessary NuGet package(s). To produce and consume messages, you need a client for Kafka. Use the most popular client: Confluent's Kafka .NET client. To install the required packages into your project, right-click on the solution and select "Manage NuGet Packages for Solution...". Then type Confluent.Kafka in the search box, select the Confluent.Kafka package, and install it. You can see the appropriate screen in Figure 7.

Alternatively, you can execute the following command in the Package Manager Console:

PM> Install-Package Confluent.Kafka

Now you'll create the classes and interfaces for the two applications.

Building the ApacheKafkaProducerDemo Application

Create a class named OrderRequest in a file named OrderRequest.cs with the following code in there:

namespace ApacheKafkaProducerDemo
{
    public class OrderRequest
    {
        public int OrderId { get; set; }
        public int ProductId { get; set; }
        public int CustomerId { get; set; }
        public int Quantity { get; set; }
        public string Status { get; set; }
    }
}

Create a new controller named ProducerController in the ApacheKafkaProducerDemo application with the code found in Listing 1 in it.

Listing 1: Create a new API Controller

using Confluent.Kafka;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Net;
using System.Text.Json;
using System.Threading.Tasks;
using System.Diagnostics;

namespace ApacheKafkaProducerDemo.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class ProducerController : ControllerBase
    {
        private readonly string bootstrapServers = "localhost:9092";
        private readonly string topic = "test";

        [HttpPost]
        public async Task<IActionResult> Post([FromBody] OrderRequest orderRequest)
        {
            string message = JsonSerializer.Serialize(orderRequest);
            return Ok(await SendOrderRequest(topic, message));
        }

        private async Task<bool> SendOrderRequest(string topic, string message)
        {
            ProducerConfig config = new ProducerConfig
            {
                BootstrapServers = bootstrapServers,
                ClientId = Dns.GetHostName()
            };

            try
            {
                using (var producer = new ProducerBuilder<Null, string>(config).Build())
                {
                    var result = await producer.ProduceAsync(topic, new Message<Null, string>
                    {
                        Value = message
                    });

                    Debug.WriteLine($"Delivery Timestamp: {result.Timestamp.UtcDateTime}");
                    return await Task.FromResult(true);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"Error occurred: {ex.Message}");
            }

            return await Task.FromResult(false);
        }
    }
}
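The consumer side mirrors the producer. As a hedged sketch (not the article's ApacheKafkaConsumerDemo code), assuming the same Confluent.Kafka package, the localhost:9092 broker, and the test topic, a polling loop could look like this:

```
using System;
using System.Threading;
using Confluent.Kafka;

public static class ConsumerSketch
{
    public static void Consume(CancellationToken token)
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            // Consumers sharing a GroupId split the topic's partitions between them.
            GroupId = "order-consumers",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("test");

        try
        {
            while (!token.IsCancellationRequested)
            {
                // Blocks until a message arrives or the token is cancelled.
                var result = consumer.Consume(token);
                Console.WriteLine($"Received: {result.Message.Value}");
            }
        }
        catch (OperationCanceledException)
        {
            // Shutdown was requested.
        }
        finally
        {
            // Leave the consumer group cleanly and commit final offsets.
            consumer.Close();
        }
    }
}
```

In an ASP.NET Core project, logic like this would typically live in a hosted background service so it runs for the lifetime of the application.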
Figure 9: Displaying the Order ID of the order being processed in the Output window

Figure 9 illustrates that the breakpoint has been hit in the producer application. Note the Timestamp value displayed in the Output window. When you press F5, the breakpoint set in the consumer application will be hit and you can see the message displayed in the Output window, as shown in Figure 10.

Kafka CLI Administration

In this section, we'll examine how to perform a few administration tasks in Kafka.

To read the messages published to the test topic from the beginning, use the console consumer:

.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

Shut Down ZooKeeper and Kafka

To shut down ZooKeeper, use the zookeeper-server-stop.bat script; to shut down Kafka, use the kafka-server-stop.bat script, as shown below:

bin\windows\zookeeper-server-stop.bat
bin\windows\kafka-server-stop.bat

Where Should I Go from Here

Kafka is a natural choice if you want to build an application that needs high-performance, resilient, and scalable messaging. This article walked you through building a simple Kafka producer and consumer using ASP.NET 6. Note that you can set up Apache Kafka using Docker as well. You can learn more about Apache Kafka from the Apache Kafka documentation.

Joydip Kanjilal
CSV files are. Comma Separated Files (CSV) are text files that contain multiple records (rows), each with one or more elements (columns) separated by a comma character. Actually, the elements can be separated by any type of delimiter, not only a comma. For instance, another common file format is the Tab Separated Value (TSV) file, where every element is separated by a tab character.

    BoxOfficeGross = 123456 });

movies.Add(new Movie() {
    Name = "Empire Strikes Back",
    Director = "Irving Kirshner",
    DateReleased = new DateTime(1977, 5, 23),
    BoxOfficeGross = 123456 });

using (var writer =
    new StreamWriter(outputFile))
using (var csv =
    new CsvWriter(writer, config))
{
    csv.WriteRecords(dataToWrite);
}

The output of this set of code can be found in Figure 2.

Configuring Writer Options

As stated earlier, the CSVWriter accepts a configuration object that's used to control output options. A few of the key options will be covered next.
Header Column

You may or may not want to include a header row in your CSV files. By default, CSVHelper adds the names of your class's properties in a header row. You can turn off the header with the following code:

config.HasHeaderRecord = false;
Figure 2: Movie Data Output as CSV file.

Changing Delimiters

One of the more common options is the delimiter used between each data element. By default, CSVHelper delimits data with comma characters. The following three examples show how you can change the delimiter to the PIPE, TAB, and a "crazy" delimiter.

config.Delimiter = "|";
config.Delimiter = "\t";
config.Delimiter = "[[YES_IM_A_DELIMETER]]";
Figure 5: The CSV file with TAB delimiter
Figure 6: The CSV file with the CRAZY delimiter

Figure 7 shows the CSV with quoted content.

// exclude the header
config.HasHeaderRecord = false;
// change delimiter
config.Delimiter = "|";
// quote every field
config.ShouldQuote = args => true;
When you examine this set of code for reading files, take notice of the following items:

You can also use class maps to change the order of how CSV elements are read from your CSV file and are applied to the returned object's properties. The following class map reads content from the CSV created earlier in this article. Notice the column order.

The code used to attach a class map is exactly like the writer's: you simply create an instance of the class map and apply it to the CSVReader's Context property:

Rod Paddock
Figure 1: History of APIs

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

namespace Juris.Api
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

The need for a class and a void Main method that bootstraps the host to start the server is how we've been writing ASP.NET in the .NET Core way for a few years now. With top-level statements, they want to streamline this boilerplate, as seen below:

var bldr = WebApplication.CreateBuilder(args);

// Register Services
bldr.Services.AddDbContext<JurisContext>();
bldr.Services.AddTransient<IJurisRepository, JurisRepository>();

var app = bldr.Build();

Here you can see that you can use the Services object on the application builder to add any services you need (in this case, I'm adding an Entity Framework Core context object and a repository that I'll use to execute queries). To use these services, you can simply add them to the lambda expression parameters:

Routing

The first thing you might notice is that the pattern for mapping API calls looks a lot like MVC Controllers' pattern matching. This means that Minimal APIs look a lot like controller methods. For example:

app.MapGet("/api/clients", () => new Client()
{
    Id = 1,
    Name = "Client 1"
});

app.MapGet("/api/clients/{id:int}",
    (int id) => new Client()
    {
        Id = id,
        Name = "Client " + id
    });

For other verbs, you handle the mapping using MapMethods:

app.MapMethods("/clients", new [] { "PATCH" },
    async (IJurisRepository repo) => {
        return await repo.GetClientsAsync();
    });

Notice that the MapMethods method takes a path, but also takes a list of verbs to accept. In this case, I'm executing this lambda expression when a PATCH verb is received. Although you're creating APIs separately, most of the same code that you're familiar with will continue to work. The only real change is how the plumbing finds your code.
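As a hedged sketch of that service injection, reusing the IJurisRepository registered above (the route itself is illustrative): the registered service is resolved automatically when it appears as a parameter of the route handler lambda.

```
// Minimal API handler: IJurisRepository is supplied by the
// dependency injection container, not by the caller.
app.MapGet("/api/clients",
    async (IJurisRepository repo) => await repo.GetClientsAsync());

app.Run();
```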
John V. Petersen
[email protected]
linkedin.com/in/johnvpetersen

Based near Philadelphia, Pennsylvania, John is an attorney, information technology developer, consultant, and author.

Past hits include three issues on Dynamic Lambdas, Promises in JavaScript, NuGet, and SignalR. I still get pinged on the Dynamic Lambda work—an oldie, but a very much relevant goodie!

I was a bit dumbfounded to realize that I never did an STP on Tasks! Better late than never! The STP series had and has one goal: to get you up and running on a .NET feature as quickly as possible so you can take it to the next level for your context and use-cases. As .NET evolves and its history grows, as well as the numerous blog posts with opinions—some good, some not, some dogmatic, some neutral—more than ever, it has become important to cut through the noise. In most cases, the official Microsoft documentation is the best source of raw, unopinionated information.

Let's get to it!

What Is a Task?

Think of a Task as a unit of work that is to be executed sometime in the future. I've added the emphasis on future because that's what a Task is: a future. An oft-asked question in the various online forums (Stack Overflow, etc.) is whether a Task is a promise in the same way promises are implemented in JavaScript. Promises are supported in .NET through the TaskCompletionSource class, which is beyond the scope of this article.

In the meantime, consider a Task as a future unit of work that may exist on its own or in the context of TaskCompletionSource. When in the context of TaskCompletionSource, a Task participates in fulfilling a promise. A promise doesn't guarantee that the operation is successful. What's guaranteed is that a result of some kind will be returned. In most cases, it's sufficient to implement tasks as independent entities, apart from TaskCompletionSource. A good use case for TaskCompletionSource is in the API context where some kind of result, even if it's an error, must be returned before yielding control back to the caller. In that regard, TaskCompletionSource is a promise in the same way promises are implemented in JavaScript.

Tasks have been a .NET feature since version 4.0, which was released in 2010. Despite being an available feature for over a decade, it remains a somewhat under-utilized feature, giving credence to the notion that sometimes, what's old is new! The same is true with the async/await language feature introduced in C# 5 in 2012.

the synchronous programming of the past. Microsoft made this task much easier when it introduced some "syntactic sugar" to make things easier. In other words, the .NET compiler undertakes the heavy lifting to transform your C# or VB.NET into the necessary IL (Intermediate Language) code to support async calls.

An async call, as the name implies, means that the code making the call does not wait for the result. And yet, the keyword associated with async is await, which can be confusing because it implies that the code waits, which contradicts async programming! Instead, what it means is that the calling code continues its work while it waits for (or awaits) the async code to complete.

Before async/await, you needed to implement callbacks, which meant that you had to pass a reference to the function that was to receive the async function's result. Async calls often took a bit to understand if the only world you were familiar with was synchronous calls, where the code would wait, and wait, and continue to wait until either the call completed or timed out. This often made for a very unpleasant user experience because the client, typically a user interface, wouldn't update. The UI and the whole app appeared to be frozen, and then after some time, everything magically came back to life!

Async/await allows for calls to be non-blocking, meaning that the current thread isn't blocked while the task is running in another thread. Hence, as stated previously, the current thread continues its work while it waits. The async/await syntax alleviates you of the direct burden of establishing callbacks. Async programming presents you with an opportunity for better-performing applications because it results in more efficient use of resources. Underneath the covers, .NET manages the spawning of a new thread in which to carry out the task, making sure that the result gets back to your calling code.

The following document list contains resources that you'll want to review. (Note: Be sure to view the docs relevant to the .NET Framework version that you and your team are using!)
);
    Assert.True(result >= 1);
}

The previous example assumes a short-running, non-CPU-intensive process. The Run method is a shorthand method for defining and launching a Task in one operation. The Run method causes the task to run on a thread allocated from the default thread pool.

What if it's a long-running process? In that case, you'd want to use Task.Factory.StartNew(), which provides more granular control over how the Task gets created by providing access to more parameters. Figure 1 illustrates the third StartNew method signature with the additional parameters controlling how the Task is created. Task.Run, on the other hand, is a simpler, shorthand way to create and start a task in one operation.

• Run(): https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.run?view=net-5.0
• StartNew(): https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskfactory.startnew?view=net-5.0
• TaskCreationOptions: https://docs.microsoft.com/en-us/dotnet/api/system.threading.tasks.taskcreationoptions?view=net-5.0

Cancellation Tokens

In the previous example, take note of the cancellation token passed as a parameter. This is one of async programming's major benefits: the ability to cancel a long-running Task. If the user elects to cancel the task while it's running, that can be accomplished because the current thread is not blocked, allowing user interaction. A great tutorial on cancelling long-running tasks may be found in Brian Lagunas's YouTube video: https://youtu.be/TKc5A3exKBQ.

Conclusion

Tasks and async programming are powerful tools to add to your development toolbox, and this article has only scratched the surface. More complex use cases include wrapping multiple tasks together and waiting for all to complete (each running in their own thread). .NET's Task and async capabilities are a rich and interesting environment! If you want to see more of this type of content, drop a line to me at CODE Magazine and I'll keep pumping them out. With .NET 5 and VS Code, there's much to distill and demystify.

John V. Petersen
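To make the Task.Run and cancellation discussion concrete, here's a minimal, self-contained sketch (not the article's listing) of launching work on the thread pool and cancelling it via a CancellationTokenSource:

```
using System;
using System.Threading;
using System.Threading.Tasks;

public static class TaskCancellationSketch
{
    public static async Task Main()
    {
        using var cts = new CancellationTokenSource();
        // Simulate a user pressing Cancel a quarter second in.
        cts.CancelAfter(TimeSpan.FromMilliseconds(250));

        var work = Task.Run(async () =>
        {
            // A long-running job that honors cancellation:
            // Task.Delay throws when the token fires.
            for (var i = 0; i < 100; i++)
            {
                await Task.Delay(50, cts.Token);
            }
            return 42;
        }, cts.Token);

        try
        {
            Console.WriteLine(await work);
        }
        catch (OperationCanceledException)
        {
            // The token fired before the loop finished.
            Console.WriteLine("Task was cancelled.");
        }
    }
}
```

Because the calling thread is never blocked, a UI driving this code stays responsive while the work runs and while it is being cancelled.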
Peter Mbanugo
[email protected]
www.pmbanugo.me
twitter.com/p_mbanugo

Peter Mbanugo is a writer and software developer who codes in JavaScript and C#. He is the author of "How to build a serverless app platform on Kubernetes". He has experience working on the Microsoft stack of technologies and also building full-stack applications in JavaScript. He's a co-chair of NodeJS Nigeria, a Twilio Champion, and a contributor to the Knative open-source project.

He's the maker of Hamoni Sync, a real-time state synchronization as a service platform. He works with foobar GmbH as a Senior Software Consultant.

When he isn't coding, he enjoys writing the technical articles that you can find on his website or other publications, such as on Pluralsight and Telerik.

balancing, and the "pay-as-you-go" computing model. Kubernetes, on the other hand, provides a set of primitives to run resilient distributed applications using modern container technology. It takes care of autoscaling and automatic failover for your application, and it provides deployment patterns and APIs that allow you to automate resource management and provision new workloads. Using Kubernetes requires some infrastructure management overhead, and it may seem like a conflict putting serverless and Kubernetes in the same box.

Hear me out. I come at this with a different perspective that may not be evident at the moment.

You could be in a situation where you're only allowed to run applications within a private data center, or you may be using Kubernetes but you'd like to harness the benefits of serverless. There are different open-source platforms, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure from the developer, allowing you to deploy and manage your applications using serverless architecture and patterns.

This article will show you how to run serverless functions using Knative and Kubernetes.

Introduction to Knative

Knative is a set of Kubernetes components that provides serverless capabilities. It provides an event-driven platform that can be used to deploy and run applications and services that can auto-scale based on demand, with out-of-the-box support for monitoring, automatic renewal of TLS certificates, and more.

Knative is used by a lot of companies. In fact, it powers the Google Cloud Run platform, IBM Cloud Code Engine, and Scaleway serverless functions.

The basic deployment unit for Knative is a container that can receive incoming traffic. You give it a container image to run and Knative handles every other component needed to run and scale the application. The deployment and management of the containerized app is handled by one of the core components of Knative, called Knative Serving. Knative Serving is the component in Knative that manages the deployment and rollout of stateless services, plus their networking and autoscaling requirements.

The other core component of Knative is called Knative Eventing. This component provides an abstract way to consume Cloud Events from internal and external sources without writing extra code for different event sources. This article focuses on Knative Serving, but you'll learn how to use and configure Knative Eventing for different use-cases in a future article.

Development Set Up

In order to install Knative and deploy your application, you'll need a Kubernetes cluster and the following tools installed:

• Docker
• kubectl, the Kubernetes command-line tool
• kn CLI, the CLI for managing Knative applications and configuration

Installing Docker

To install Docker, go to the URL https://docs.docker.com/get-docker and download the appropriate binary for your OS.

Installing kubectl

The Kubernetes command-line tool kubectl allows you to run commands against Kubernetes clusters. Docker Desktop installs kubectl for you, so if you followed the previous section on installing Docker Desktop, you should already have kubectl installed and you can skip this step. If you don't have kubectl installed, follow the instructions below to install it.

If you're on Linux or macOS, you can install kubectl using Homebrew by running the command brew install kubectl. Ensure that the version you installed is up to date by running the command kubectl version --client.

If you're on Windows, run the command curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe to install kubectl, and then add the binary to your PATH. Ensure that the version you installed is up to date by running the command kubectl version --client. You should have version 1.20.x or v1.21.x because in a future section, you're going to create a server cluster with Kubernetes version 1.21.x.

Installing kn CLI

kn CLI provides a quick and easy interface for creating Knative resources, such as services and event sources, without the need to create or modify YAML files directly. kn also simplifies completion of otherwise complex procedures, such as autoscaling and traffic splitting.

To install kn on macOS or Linux, run the command brew install kn.

To install kn on Windows, download and install a stable binary from https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest. Afterward, add the binary to the system PATH.
Creating a Kubernetes Cluster

You need a Kubernetes cluster to run Knative. You can use a local cluster using Docker Desktop or kind.

Create a Cluster with Docker Desktop

Docker Desktop includes a stand-alone Kubernetes server and client. This is a single-node cluster that runs within a Docker container on your local system and should be used only for local testing.

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, go to Preferences > Kubernetes and then click Enable Kubernetes. Click Apply & Restart to save the settings and then click Install to confirm, as shown in Figure 1. This instanti-

curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/01-kind.sh | sh

Wait for a few minutes for your cluster to be ready. When it's done, you should have a single-node cluster with the name serverless-function, in Frankfurt. The size of the node is a computer with two vCPUs and 4GB RAM. Also, the command you just executed sets the current kubectl context to that of the new cluster.

Figure 2: kube context

You can modify the values passed to the doctl kubernetes cluster create command. The --region flag indicates the cluster region. Run the command doctl kubernetes options regions to see possible values that can be used. The computer size to use when creating nodes is specified using the --size flag. Run the command doctl kubernetes options sizes for a list of possible values. The --count flag specifies the number of nodes to create. For prototyping purposes, you created a single-node cluster with two vCPUs and 4GB RAM.

Check that you can connect to your cluster by using kubectl to see the nodes. Run the command kubectl get nodes. You should see one node in the list, and the STATUS should be READY, as shown in Figure 3.

Install Knative Serving

Knative Serving manages service deployments, revisions, networking, and scaling. The Knative Serving component
Figure 5: Function invocation response

function. You can specify the buildpack to use, environment variables, and options to tweak the autoscaling options for the Knative Service. Open the file and update the envs field with the value below:

- name: TARGET
  value: Web

The index.js file contains the logic for the function. You can add more files to the project, or install additional dependencies, but your project must include an index.js file that exports a single default function. Let's explore the content of this file.

The index.js file exports the invoke(context) function that takes in a single parameter named context. The context object is an HTTP object containing the HTTP request data, such as:

• httpVersion: The HTTP version
• method: The HTTP request method (only GET or POST supported)
• query: The query parameters
• body: Contains the request body for a POST request
• headers: The HTTP headers sent with the request

The invoke function calls the handlePost function if it's a POST request, or the handleGet function when it's a GET request. The function can return void or any JavaScript type. When a function returns void, and no error is thrown, the caller will receive a 204 No Content response. If you return some value, the value gets serialized and returned to the caller.

Modify the handleGet function to return the value from the TARGET environment variable, the query parameters, and the date. Open index.js and update the handleGet function with the function definition below:

function handleGet(context) {
  return {
    target: process.env.TARGET,
    query: context.query,
    time: new Date().toJSON(),
  };
}

You can also run the function locally using the command kn func run.

Other Useful Commands

You're now familiar with creating and deploying functions to Knative using the build and deploy commands. There are other useful commands that can come in handy when working with functions. You can see the list of commands available using the command kn func --help.

The deploy command builds and deploys the application. If you want to build an image without deploying it, you can use the build command (i.e., kn func build). You can pass it the flag -i <image> to specify the image, or -r <registry> to specify the registry information.

The kn func info command can be used to get information about a function (e.g., the URL). The command kn func list, on the other hand, lists the details of all the functions you have deployed.

To remove a function, you use the kn func delete <name> command, replacing <name> with the name of the function to delete.

You can see the configured environment variables using the command kn func config envs. If you want to add an environment variable, you can use the kn func config envs add command. It brings up an interactive prompt to add environment variables to the function configuration. You can remove environment variables as well using kn func config envs remove.

What's Next?

Serverless functions are pieces of code that take an HTTP request object and provide a response. With serverless functions, your application is composed of modular functions that respond to events and can be scaled independently. In this article, you learned about Knative and how to run serverless functions on Kubernetes using Knative and the func CLI. You can learn more about Knative on knative.dev, and a cheat sheet for the kn CLI is available at cheatsheet.pmbanugo.me/knative-serving.

Peter Mbanugo
CODA:
On Plain, Simple Language

The Danish physicist and Nobel Laureate Niels Bohr believed that any concept, no matter how complex, should be explainable in plain, simple language. Bohr was instrumental in aiding understanding of Quantum Theory, a very complex subject indeed. He believed that once the heavy lifting was completed as to any theory, the key to advancement was in a collective understanding of what something generally is and its associated benefits. Not everyone was equally an expert in Quantum Theory. And there was only one Niels Bohr. Although he could have a conversation with others in his field on an equal footing, others needed to have some basic understanding of what these complex and abstract ideas could practically accomplish. Institutions and governments funded research, but sanction and funding were ultimately implemented by people; people who weren't on the same intellectual plane as Niels Bohr.

Simple does not mean dumbed-down. Simple means not complex. We can understand, in general terms, how a clock, a car, or a computer works without knowing how to build those things. Think of an elevator pitch and what's required for it to work. If an elevator ride lasts no more than 15 seconds, in that time, to get to the next step, the idea or concept must be expressed in language that anybody can understand. Understanding does not require expertise. If getting to the next step required equal footing on the same expertise, nothing would ever be accomplished. We're just trying to get to the next step, not solve every problem immediately.

What, then, qualifies as "plain, simple language?" That very much depends on your point of view and how that point of view meshes with other points of view. Those points of view must be reconciled for it to seem simple to all parties.

A specific area of software development where a common language between developers and users matters is Domain-Driven Design (DDD), an approach developed by Eric Evans. In that work, he coined the term Ubiquitous Language.

DDD is a very good book for outlining what the two sides of the software development coin, developers and users, should do. If it were only that easy: to just speak the same language, to use common terms that reference the same commonly accepted thing. Bounded contexts and well-defined domain models sound good. Those things, which are DDD elements, are themselves the end result of some other process. It begs the question of how we, the two sides, build "quality" software.

Quality can be a bit of a subjective, loaded term. In this context, is the software easy to understand? Does it work according to the customer's expectations? That's a practical, not a theoretical concern. If the software doesn't work, all the theory in the world won't matter. The art of software development, as I see it, is the proper mix of theory and pragmatism. There's a lot of good theory in the DDD book and other resources, with some of it being useful in a pragmatic sense. It's always important to remember that the value of theory is a function of its application in the real world.

One possible answer to how to employ plain, simple language is the Agile Manifesto and its 12 Principles (http://www.agilemanifesto.org/principles.html), in which the principles themselves are based on the Agile Manifesto (http://www.agilemanifesto.org). The Agile Manifesto and its 12 Principles are abstract concepts and are arguably written in plain, simple language. That's a subjective call based on a shared point of view.

The one thing I always found interesting about the Agile principles is the relative position of Principles One and Seven:

• Principle One: Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
• Principle Seven: Working software is the primary measure of progress.

Why aren't these two principles adjacent to one another? Is that important? The only way software can deliver value to the business is if it's A): delivered, and B): works per the business's requirements. Reading the principles, the two are disconnected. Their relative position to one another is itself important context that aids understanding, and thus goes to whether the language is truly "plain and simple!" This is where having a good editor matters.

Then there's Principle Four: Business people and developers must work together daily throughout the project. Does this mean that developers aren't or can't be business people? Are these mutually exclusive? What does it mean to "work together?" What about understanding each other?

How about Principle Three: Deliver working software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale. A couple of weeks vs. a couple of months? The term "couple" is more idiomatic English than a precise ordinal measure. Is it two weeks?

The principles declare a preference for "the shorter timescale." Scrum, an agile-based framework, arguably conflicts with Principle Three, or at the very least makes it ambiguous whether Principle Three is a bona fide agile requirement, because at the conclusion of each sprint, the aim is to deliver "potentially releasable software." If you don't release software, it isn't delivered. And if it isn't delivered, it isn't delivering any value. Just ask any businessperson who's funding the project with a budget and business objectives! The conflict here is that on one hand, scrum is based on Agile Principles. On the other hand, arguably, scrum, via the Scrum Guide (http://www.scrumguides.org), appears to throw this Agile principle out the window! In other words, as plain and simple as the language appears, when the one thing (the collection of principles) conflicts with the other thing (the manifesto) it's based upon, is the language really plain and simple? To the contrary, the language is ambiguous and becomes less and less useful. In such a case, the gulf between the theory and its practical usefulness increases. Theory, without any sort of practical application in the business context, is useless. In my opinion, Agile has drifted away from its roots in the manifesto, in part because of a lack of the plain, simple language that leads to better understanding.

I've always adhered to one basic principle: When in doubt, go back to basics. This isn't a rant against Agile. But of late, there has been some revolt against Agile and its progeny—scrum, in particular—citing that Agile just doesn't work. The Scrum Guide, where scrum is codified, has been "tinkered" with over the years. Once upon a time, scrum was described as simple to understand but difficult to master.

There's also the change in wording from self-organizing to self-managing. Is that a distinction without a difference? Is that plain, simple language that leads to better understanding? That's a subjective call. It's one that each team and organization must answer for itself.

This is also not a rant against scrum, whether it's the scrum.org camp or the scrumalliance.org camp. Although there are two different organizations, they both point to the same Scrum Guide (www.scrumguides.org) and the Agile Manifesto (www.agilemanifesto.org). Yet how each camp describes scrum isn't always consistent. Heck, I've been practicing Agile principles in one form or another for over 20 years and I'm confused. It's for that reason that I've decided to use the Agile Manifesto as my sole basis to define Agile, which bears repeating in its entirety here:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value:

• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan

That is, while there is value in the items on the right (processes and tools), we value the items on the left (individuals and interactions) more.

The Agile Manifesto nailed it as far as A): plain, simple language, and B): what could be built from it.

Is Agile alone enough to build software? Of course not. Agile doesn't exist in a vacuum. Software development doesn't exist in a vacuum either. Business doesn't exist in a vacuum, because it's the business that owns the software and will be the arbiter of whether the software works and, consequently, whether it delivers value. It's incumbent on software developers to be the Rosetta Stone to make the translation. We must become experts enough in the business to do that. The business isn't going to be experts in software development.

In my experience, open, honest, and transparent communication is a great place to start. At the beginning of every project, people want the same thing. Agile may or may not be better than Waterfall in a given context; it all depends on your organization. After all, context is a necessary factor in assessing whether language is plain and simple. We must always consider who the audience is. It's an obvious point perhaps, but it's one worth repeating. Employing plain, simple language where individuals and interactions work and collaborate to build, modify, and deliver working software that yields value is, in my opinion, a more straightforward way of articulating the Agile Manifesto's true meaning and intent. As for the Agile Principles, that's up to you and your organization. There is, or at least there should be, a shared goal of getting things accomplished. Only you and your team can assess what those goals and objectives should be. Principles, absent context and what they're supposed to mean, despite what appears to be plain, simple language, are useless.

Every successful project has one thing in common: good people with a shared goal. Yes, mistakes are made along the way. We never get it right the first or second time in most cases. For us techies, let's avoid tech talk. And if the business is lacking clarity on something, ask for more clarity. If something isn't feasible, explain why not in plain, simple language that conveys the right concerns that the business can act upon. If we don't do that, then how will we understand each other? And if we can't do that, how will we ever build and deliver working software that delivers value?

John V. Petersen

Editorial Contributors
Otto Dobretsberger
Jim Duffy
Jeff Etter
Mike Yeager

Writers In This Issue
Bilal Haidar, Joydip Kanjilal, Sahil Malik, Peter Mbanugo, Rod Paddock, John V. Petersen, Paul D. Sheriff, Shawn Wildermuth, Mike Yeager

Technical Reviewers
Markus Egger
Rod Paddock

Production
Friedl Raffeiner Grafik Studio
www.frigraf.it

Graphic Layout
Friedl Raffeiner Grafik Studio in collaboration with onsight (www.onsightdesign.info)

Printing
Fry Communications, Inc.
800 West Church Rd.
Mechanicsburg, PA 17055

Advertising Sales
Tammy Ferguson
832-717-4445 ext 26
[email protected]

Circulation & Distribution
General Circulation: EPS Software Corp.
Newsstand: American News Company (ANC) Media Solutions

Subscriptions
Subscription Manager: Colleen Cade
[email protected]

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445
TAKE AN HOUR ON US!
Does your team lack the technical knowledge or the resources to start new software development projects,
or keep existing projects moving forward? CODE Consulting has top-tier developers available to fill in
the technical skills and manpower gaps to make your projects successful. With in-depth experience in .NET,
.NET Core, web development, Azure, custom apps for iOS and Android and more, CODE Consulting can
get your software project back on track.
Contact us today for a free 1-hour consultation to see how we can help you succeed.
codemag.com/OneHourConsulting
832-717-4445 ext. 9 • [email protected]
shutterstock/Lucky-photographer
NEED MORE OF THIS?
Is slow, outdated software stealing way too much of your free time? We can help.
We specialize in updating legacy business applications to modern technologies.
CODE Consulting has top-tier developers available with in-depth experience in .NET,
web development, desktop development (WPF), Blazor, Azure, mobile apps, IoT and more.
Contact us today for a complimentary one hour tech consultation. No strings. No commitment. Just CODE.
codemag.com/modernize
832-717-4445 ext. 9 • [email protected]