Vagrant and Docker Love


Not a full-on post, but more a note for myself to both investigate and test this… It would seem that Vagrant has now added provisioner support for Docker here. Look for a new post on this in the near future!
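As a note to future me, the Docker provisioner appears to hook into the Vagrantfile like any other provisioner. A minimal, untested sketch (the box name and pulled image are placeholders I'm assuming for illustration):

  Vagrant.configure("2") do |config|
    config.vm.box = "hashicorp/precise64"   # placeholder base box

    # Ask the Docker provisioner to install Docker and pull an image.
    config.vm.provision "docker" do |d|
      d.pull_images "ubuntu"
    end
  end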

I’m working steadily to start off the new year, so my posting may be somewhat sporadic, but I will continue to blog Puppet fundamentals, supporting tools, and related items as I have the chance.

And in that vein, I encountered something this week I wanted to share with you.

It would seem that the new feature whereby you can include Facter facts in a module and have pluginsync distribute them, using the new mechanism of:

/modulename/facts.d/external_fact

is not 100% reliable when distributing those facts. By this, I mean the following observed behavior.

  1. You create your fact and place it in the above directory. Say, a shell script that gives you a value.
  2. You make your fact executable, and running it natively at the shell works perfectly.
  3. You do a puppet agent run, and the fact syncs to the agent machine, but never becomes available in the facter table.
  4. You find the sync’ed location of the fact (in the logs from the sync) and run it manually, and it works perfectly.

I spoke with some folks at Puppet, just in conversation, describing what I was seeing, and they suggested the following workaround:

Make the fact a file resource and place it in /etc/puppetlabs/facter/facts.d. This makes the fact available to the facter system, so it displays correctly in the facter table and responds to facter -p as expected.
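A minimal sketch of that workaround, assuming a module named mymodule and an external fact script named datacenter.sh (both names are just for illustration):

  # datacenter.sh itself only needs to emit key=value output,
  # e.g.  echo "datacenter=atl01"
  file { '/etc/puppetlabs/facter/facts.d/datacenter.sh':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0755',
    source => 'puppet:///modules/mymodule/datacenter.sh',
  }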

That’s it for now! Look for more to come on workflows and tools in the very near future.

Moving My Tech Content to GitHub


Happy New Year, all!

Well, you may notice a bit of a change in format & layout. I got tired of fighting the foibles of WordPress. Every few weeks, WordPress would decide, quite on its own and for no apparent reason, that it no longer wished to display my blog.

As such, I’ve moved to Octopress, hosted at GitHub, and am doing a permanent redirect to it from my site until I can work out both Web & Mail: keeping my MX records where they are while moving my Web address to GitHub directly.

In the meantime, all tech posts have been duplicated here, and can be found by navigating the menus.

Note:

Over the next few weeks and months, you’ll see things move around, features being added and removed, and themes and plugins changing and/or disappearing as I learn Octopress and figure out all its ins and outs. Please bear with my dust during this time.

PuppetConf 2014


Glad to be at PuppetConf with #ShadowSoft exploring all the latest and greatest in PuppetLabs.


Southeast Puppet User’s Group September



John Ray is bringing the Puppet + Docker goodness in his talk tonight: “Deploying Docker Containers with Puppet”.  Join us each month at the Shadow Soft offices for the latest in DEVOPS topics and information.  Always fun, lots of discussion and information surrounding Puppet topics and associated technologies.  There’s always pizza and beverages of all kinds, and we’ve finally moved into our new meeting/class rooms, so come on out.

The Toolbox Grows…


So far we’ve gotten our heads around some important things.  First and foremost, vim: our editor and companion for creating great code, giving us ways to see our code in action and determine at a glance whether our syntax is correct.  Also, we’ve looked at revision control: the single largest “CYA” ohmygodimgladivegotanoldercopytorestoreto sort of paradigm, where you can roll yourself back to previously “known good” revisions to save the day… besides that, it’s just darned good practice to keep your code externally saved, revision controlled, and accessible.

I’ve also talked about the importance of workflow clarity and quality.  If you implement a poor workflow, you just have an automated poor workflow. The key word here is “poor”.

Next up on our browse through the “toolbox” is “Vagrant”.  What is this Vagrant, you ask?

Virtualization is paramount in today’s world in a number of ways and for a number of reasons: extending your server farms to handle even more application expression, expanding your own desktop machines to test and try different operating systems, or even just rolling up an ad-hoc VM so you can try something without touching a “real” machine in your environment.

Some may disagree, but I’ve found virtualization to be one of the most powerful tools added to the toolbox in years.  Not only can you prototype systems or applications, but you can prototype entire environments.  This is where Vagrant shines, and especially in the context of Puppet (master + clients), allows you to create a fully functioning Puppet environment upon which to develop, prototype, and test without ever jeopardizing even the least important system of your infrastructure.  I count that as a “win”.  Let’s see what this tool can do.

What is Vagrant?

According to its website:

Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team.

To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware, AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can be used to automatically install and configure software on the machine.

There’s a lot there, but it’s just a fancy way of saying exactly what I said before.  Vagrant is essentially a framework system that wraps your virtualization engine to manage environments of VMs.  Here is where Vagrant will hold the power for us.

Virtualization

If Vagrant is the framework, then virtualization is the foundation.  Now, I’ve chosen to use VirtualBox for my virtualization technology, but VMware works every bit as well.  I am doing all my testing over VirtualBox, however, so YMMV.  VirtualBox is freely available from Oracle, and you can download the appropriate version at https://www.virtualbox.org.  I am running the latest version, 4.3.12 (as of this writing), and it serves the Vagrant system extremely well.

Vagrant

Next, you’ll need to install Vagrant on your system.  You can find all the right packages at http://www.vagrantup.com.  I am currently running version 1.6.3 without errors.

Warning: I want to make a disclaimer here, since I’ve had an issue or two with Vagrant on a platform I don’t use: Windows.  I am a Mac & Linux user, and have had no issues using the Vagrant/VirtualBox combo on either of these.  However, literally every time I’ve used Vagrant over Windows, it’s just been a mess.  I’ve known one person (ONE!) who has gotten Vagrant to work over Windows, and it required his getting into the product, editing code, etc.  As such, I wouldn’t recommend it for those new to the platform.

On the Mac platform, you get a .dmg file; extract it and run the installer.  Linux versions are available as RPM and Debian packages.  Once you’ve installed Vagrant, let’s mess around a bit with it to see what we can do.

Getting Started

Vagrant is a unique tool in that it allows you to manage all these varied VMs, but adds a twist.  The big twist is that you don’t have to have the source materials for the VMs you’re installing.  In fact, the simplicity of turning up a new VM is astounding.  Take the following series of commands:

cd
mkdir precise32
cd precise32
vagrant init hashicorp/precise32
vagrant up

If your Vagrant is installed correctly, a number of things start to happen.  First, Vagrant places a file in your cwd called “Vagrantfile”.  Your Vagrantfile initially looks like this:

Note that this is a long file with a lot of explanatory documentation.  In actuality, the most important part of your Vagrantfile can be summed up here:
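A sketch of what that uncommented core typically looks like (exact contents vary by Vagrant version; the box name is the one from our example):

  # -*- mode: ruby -*-
  # vi: set ft=ruby :

  VAGRANTFILE_API_VERSION = "2"

  Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
    # The only setting "vagrant init" leaves uncommented: the base box to use.
    config.vm.box = "hashicorp/precise32"
  end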

These are the lines that are uncommented, plus the top two declaratives that tell Vagrant what to do.  It’s a very simple file that does some very powerful things.  First, Vagrant checks the ~/.vagrant.d location in your home directory to see if you already have the “precise32” Vagrant source “box” (more on boxes later).  If you don’t, it downloads the pre-rolled image and places it in your ~/.vagrant.d directory.  Either way, it then provisions a VM from that box in your virtualization engine of choice, gives it a randomized name (mine, for instance, is called “precise32_default_1402504453444_30545”), sets it up to respond to Vagrant commands, and starts it up within VirtualBox.  Vagrant takes away the selecting of an .iso image, connecting it to the virtual CD/DVD-ROM, starting an installer, and so on.

Precise32 is simply a test scenario.  Vagrant’s site has quite a number of varied and specially configured “box” files that you can use to prototype on at their “ready-made” box discovery site: https://vagrantcloud.com/discover/featured.  You can install boxes with too many variations and differentiations to enumerate here, and that’s not really the point for our purposes… you may find these of great assistance in your own workplace, but let’s continue.

When you run the “vagrant init” command listed above, it places a Vagrantfile, and when you do a “vagrant up”, Vagrant automatically retrieves your box file, provisions the VM, and starts it.  Now, by simply running “vagrant ssh default”, you are logged into this virtual machine!  You also have full sudo to become root and do any sort of damage you may wish to do.  If you log out (“exit” or CTRL-D) and type “vagrant destroy”, the VM goes away and you have nothing in VirtualBox.
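The whole lifecycle, condensed (the machine name “default” is what Vagrant assigns when the Vagrantfile defines only a single machine):

  vagrant up            # fetch the box if needed, provision, and boot the VM
  vagrant ssh default   # log into the running VM
  exit                  # or CTRL-D, back on your host
  vagrant destroy       # tear the VM down completely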

Were we to just stop here, the power inherent in being able to just have these “Vagrantfiles” (sort of like a “Makefile” for boxes) to spin up and down test scenarios at will is incredible.  But, let’s look at this in light of the Vagrantfile, what it can do and how you can customize it.  There is an entire descriptive language surrounding Vagrant PLUS Vagrant has a plugin infrastructure whereby developers can extend Vagrant’s capabilities.  We will capitalize on these later.

So, imagine a scenario where you can create a directory, copy a text file into it, run a single command, and it automatically provisions a 4-node Puppet Enterprise infrastructure, fully installed with a master and three agents, MCollective fully installed, PuppetDB installed and in use…  literally a full installation just like you would use for your infrastructure…  Now we get powerful.  NOW we have the ability to do some cool things.
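To make that concrete, here is a hedged sketch of the multi-machine idea (the box name, hostnames, and everything else here are placeholders; the real, fully provisioned version is what we’ll build next time):

  Vagrant.configure("2") do |config|
    config.vm.box = "puppetlabs/centos-6.5-64-nocm"   # placeholder base box

    # One Puppet master...
    config.vm.define "master" do |master|
      master.vm.hostname = "master.example.vm"
    end

    # ...and three agents, defined in a loop.
    (1..3).each do |i|
      config.vm.define "agent#{i}" do |agent|
        agent.vm.hostname = "agent#{i}.example.vm"
      end
    end
  end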

Next time, that’s exactly what we’re going to do.

“Do’s and Don’ts” for Your Puppet Environment


IT automation, like the features and functions offered by Puppet, is riddled with a number of pitfalls. Nothing dangerous or site-threatening in the near term; however, evolving a bad plan can lead you down a painful path to re-trek when you ultimately need to demolish what you’ve done and re-tool, re-work, or even re-start from scratch.  Some simple suggestions can help smooth your integration, and also provide tools and methodologies that make changes in philosophy easy to test and implement, as well as make the long road back from a disaster easy(-ier?) to navigate.

Here are some simple guidelines that can provide that foundation and framework:

DO Always Use Revision Control

It would seem this would be a foregone conclusion in this day and age, but you would be surprised just how many shops don’t have revision control of any kind in place.  A series of manifests or configurations might be tarred up and sent to the backup system, but aside from dated backups, there’s no real versioning…just monolithic archives to weed through in a time of disaster.

Revision control puts you one command away from restoring those configurations and manifests (and even your data vis-a-vis “Hiera”) to their original locations in the most recent state.

DO Rethink Your Environments

If you automate a bad workflow, you still have a bad workflow.  (albeit an automated one!)

Rethink how you do things and why.  Why do you promote code the way you do, and is there a better way to do it?  Why do you still have a manual portion to your procedure, and is it entirely necessary, or can it be remanded to Puppet to do for you as well?  What things are you doing well?  How can they improve?

Try to think through all your procedures.  There are more than you think, and they’re often less optimized than they can be.  If you’re going to implement Puppet automation, it’s time to retool.

DO Implement Slowly and Methodically

Another pitfall a lot of shops wander into is that they try to do too much all at once, and do none of it well.  They implement too quickly, migrating a huge environment it took years to build (sometimes as much as a decade!) through a single professional services engagement or at an otherwise unrealistic pace.  Automation is complex, but if you take the time to implement correctly, piece-by-piece and hand-in-hand with the rethinking of your environment referred to above, you can revolutionize the way you work and make the environment considerably more powerful, considerably easier to work with, and ultimately release yourself to work on much more interesting problems in your environment.  Take your time to build the environment you want.

DO Engage the Community

By using Puppet, you are the beneficiary of the greatest software development paradigm in history – the Open Source movement.  People all over the world have taken part in crafting the powerful tool you have before you.  If you are able to help in like manner, by all means contribute your code to the community. (With your data in Hiera, this is easier than ever!)  Join a Puppet Users Group.  Share your clever solutions to unique problems with the community via GitHub, the Puppet Forge, your website… give back.  The more you pour in, the more you get out, and something you solve may end up baked into the final product one day in the future.

DON’T Pit Teams Against Each Other

DON’T make this a DEV vs OPS paradigm.  This is a marriage of the best tools of both worlds.  Depending on how your culture breaks down, this could be an OPS-aware way of doing development, or a DEV-informed way of doing operations.  You need to remember one thing in all of it.  The marriage of these worlds is a teamwork effort.

I was averse to the term DEVOPS when it first started being used, as in the development world I was engaged with it was a tool for ceding root-level access to developers.  In a properly managed, secure environment, this is always a no-no.  Development personnel are not trained systems people, and rarely have that background.  By the same token, never ask your systems people to delve into core development, or to troubleshoot your developers’ code.  They are not tooled for that work.

This does not say that one is better than the other, nor does it say they do not share a certain amount of core skills at the basest levels. Much like the differences between civil and mechanical engineers, each has a base level of knowledge that ties them together, but each is highly specialized.  You don’t want your civil engineer building machine tools just as much as you don’t want your mechanical engineer building bridges.  Each discipline is highly specialized and carries with it nuance and knowledge you only gain through experience…experience on the job.

Instead, find a culture and a paradigm that joins the forces of these two disciplines to build something unique and special rather than wasting time with dissension and argument.

DON’T Expect Automation to Solve Everything

I know, that sounds like sacrilege at this point, but it’s true.  No matter how automated your site becomes, how detailed your configuration elements are, or how much you’ve detailed your entire workflow, you still can never replace the element of human consideration and decision-making.

Automation, as I’ve said before, automates away the mundane to make time for you, DEVOPS person, to work on really interesting and curious work.  You can now write that entire new whiz bang gadget you’ve been conceptualizing for the last several years, but have never quite gotten there because you were too busy “putting out fires”.  Puppet automation is definitely a watershed in modern administration and development, but people are still needed.

Another “intangible” you may not readily think about when considering a DEVOPS infrastructure is one of culture.  The best places to work are always the best cultures brought about by the right collection of people, ideas, personalities, and management styles.  When you find that right mix of people and ideas, the workplace becomes a, forgive me, magical place to be.  Automation can never make that happen.

DON’T Starve Your Automation Environment

Automation solves a lot of things, but one thing it cannot do is feed itself.  This particular animal has a ton of needs over time.  From appropriate hardware to personnel, the environment needs time, attention, and consideration.  Remember that this is the “machine tool” of your whole company.  It is the thing that builds and maintains other things.  As such, its priority rises above that of the next web server or DNS system.

Always allocate enough resources (read: money, personnel, and time) to your environment.  If that means engineer time to work on a special project and to do the job right, that’s what it means.  And, yes, it’s more important than meeting an arbitrarily assigned “live date” for your new widget or site or application.  The environment comes first, and all else follows.  If you give your automation initiatives the resources and time they deserve, a number of years down the road you will look back and be amazed at the sheer amount of work your team was able to accomplish just by keeping this simple precept.

DON’T Stop Evolving

Never stop learning.  Never stop bettering yourself or your environment.  Always keep refactoring your code.  (I.e., if you wrote that Apache module 4 years ago, chances are good that what you’ve learned in the interim can go back into making that module even better.)  Always keep your people trained and engaged on the latest developments in Puppet and all the associated tools.  Never stop striving to be better and never stop reaching.  I may sound like your coach from high school in this, but those principles he was trying to impart hold true.  If you continue to drive forward and reinvent yourself as a regular part of your forward pursuits, the endpoint of that evolution will benefit you personally, your team both vocationally and culturally, your company’s efficiency, and your environment’s impact on your bottom line.

Conclusion

If we keep a view of our environment and tools that rises above the simple concepts of “that software I bought” and “fit it in between all the other things you have to do,” and give Puppet its proper place in our company, it can truly revolutionize our workflow.  When properly placed culturally, and from a design, implementation, and workflow perspective, it can transform any shop on levels not readily observable when looking at the price tag or the resource requirements list.  DO let Puppet transform your environment and workflow, and DON’T be afraid to take the plunge.  It’s exciting, challenging, and can easily take your company to the “next level”.

GitHub, Git, and Just Plain Revision Control


One of the “bugaboos” in the sysadmin world for the longest time was the reluctance to use those “stinky developer tools” in our world for any reason.  I’m not sure of the impetus behind this, but my wager is on something akin to security, or yet another open port, or “attack vector” if you will.  But today’s competent and conscientious systems admin (not to mention DEVOPS person) will use revision control as their go-to standard for collecting, versioning, backing up, and distributing all manner of things.

I’ve seen some shops use CVS as their choice, old though it is, just as a large “bucket” in which to throw things for safekeeping, with revisions and rollbacks available in case of some uncertain, as-yet-unencountered event.  Subversion was the next generation of revision control tools.  Darling of developers and bane of disk space, Subversion had many more features and performed essentially the same task.

Now, Git is the flavor of the month, and not only has it gained widespread acceptance as a standard way to “do” revision control, it’s the de-facto way to do DEVOPS in a Puppet world.  Granted, there are those brave souls out there who have tried to stick with the older tools, but the workflow and the “glue” between all the various components therein are built around Git.  Hence, this post.

What is Git, really?

Git was developed by Linus Torvalds for Linux Kernel collaboration.  He needed a new revision control system akin to the previously used BitKeeper software that was unencumbered by copyright and able to handle the unique distributed development needs of the Linux project.  So, rather than try and use someone else’s project, he collated what was needed and developed the project himself.

Now, Git is used both privately and publicly throughout the world for many projects.  Git is lightweight and works in a more efficient manner, moving changes via diffs rather than whole repositories, and it allows developers to maintain and manage an entire repository on their own systems, either connected to or disconnected from the Internet.  Then, they can “push” all their changes back to the central repository as needed.

Enter GitHub

For our purposes, we’ll specifically be working in GitHub.  GitHub is a project offering web-based hosting of your code that you can source from anywhere.  GitHub offers public and private hosting and a spate of other related services for development collaboration on the Internet.  If you do not have a GitHub account, you’ll need to surf on over to the site and sign up for one.  It’s free and it’s fast, and I’ll be using and sourcing it heavily as this series continues.

Basic Git

Git itself is available on most modern platforms and can easily hook into GitHub for our purposes.  I will be mostly referring to command-line usage of git, but you will find quite a bit in the way of tools, frontends, and “helper” apps for Git that you may or may not wish to leverage as you learn and incorporate Git into your workflow.  In the meantime, stick with me on command-line work.

When you install git on your unix-like platform, it will drop a few binaries.  The one we’re most interested in is the git binary itself.  It’s very simply designed and has a very straightforward set of options you can get from the command line by simply typing “git” with no options, or “git help”.  The output is below:

We’re most interested in a small subset of commands for our purposes here.  They are add, commit, pull, push, branch, checkout, and clone.

I will be referencing one particular way to “do” git which works for me, but as with anything TMTOWTDI and YMMV.

GitHub Portion

I am going to assume you’ve created a GitHub Account.  When you create your account, you’ll have a unique URL assigned to you based on your username.  Mine, for instance, is https://github.com/cvquesty/.  The basic interface to GitHub is rather straightforward and looks like the following:

The interface keeps track of all the projects you’re working on and the frequency with which you commit or otherwise use your repository, and (most importantly) gives you a centralized server storing those projects that you can source from any internet-connected system.

Make a Repository

In the upper right-hand corner of your screen, you’ll notice a “+” symbol.  Let’s click that and create us a new repository.  You’ll be presented with a dialog to name and describe your new repo.  I’ll use the name “samplerepo” and the description “Sample Repo for my Tutorial” with no options other than the defaults.  (We’ll go through those options manually shortly.)  After creating the repository by clicking “Create Repository”, I’m presented with a page that has step-by-step instructions on what to do next.  I’ll include that here for you.

[screenshot: the new repository’s setup-instructions page]

As you can see, you have your repo referenced at the top by username/reponame.  You have instructions on how to use the repo from both the GitHub desktop client and the command line, and some special instructions for when you already have content locally on your system and are just now uploading it into this repository you’ve created to hold it.  We’re interested in the command-line instructions.

A Place to Git

On my system (a Mac), I have Git installed by default, and I have a directory in my home directory simply called “Projects”.  Under there, I have a “Git” directory.  ALL of my work in Git goes here.  This is not a hard-and-fast rule; I just chose it as my location to place all my Git work so it is centralized and all collected together.

What we’re going to do next is to configure Git, create a location for our repo, make a file to commit to the repo and then push that file up to GitHub to see how that workflow works.  Let’s get started.

Configuring Git

Since Git is personal to you as a user, you need to let Git know a few things about you.  This gives your git server (in our case GitHub) the information it needs when you’re pushing code (like your identity, default commit locations, etc).  First, your name and email:

git config --global user.name "John Doe"
git config --global user.email you@yourmail.com

You’ll only need to perform this once.  There are quite a few options and you can read up on those here at your leisure.

Next, create a location for your new repo.  I chose my aforementioned directory and created the location

/Users/jsheets/Projects/Git/samplerepo

for demonstration purposes.  From here, though, we can take up with the instructions on the GitHub page displayed after creating your repo. I’ll reproduce that here for reference:

cd /Users/jsheets/Projects/Git/samplerepo
touch README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/cvquesty/samplerepo.git
git push -u origin master

If all has gone well, you have now created an empty README.md file, committed it to your local Git repository and then subsequently pushed it up to GitHub.  You’ll note that we added “origin” as the remote and then we pushed to “origin” in a thing called “master”.  What’s that all about?

GitHub (and Git) refer to their repo location as “origin”.  This becomes handy when you start pushing between remote repositories, and from remote to remote to GitHub, etc.  So, it makes sense to refer to GitHub functionally rather than by its assigned domain name.  By saying “origin”, we’re making GitHub the de-facto standard center of everything we’re doing.

Next, we refer to “master”.  What is that?  Simply stated, we’re pushing to a “branch” called “master”.

Branching

Branching is a method by which you can have multiple code “branches” or “threads” in existence simultaneously, and Git is managing them all for you.  For instance, you may wish to have one code collection only for use in production systems while maintaining a separate one for development systems.  In fact, you can create a random branch with a bug name (bug1234, for instance), commit your changes to that, test it, and push it to origin, then pull it down to all your production hosts, solving a big problem in your site or codebase.  Better yet, if it all works great and you’re happy with it, you can “merge” that bug back into your main code repository, making it a permanent fixture in your code in whatever branch you like. (or even all of them!)
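A hedged sketch of that bug-branch flow (the branch name and commit message are made up for illustration):

  git checkout -b bug1234       # create the bug branch and switch to it
  # ...edit and test your fix...
  git commit -a -m 'fix for bug1234'
  git push -u origin bug1234    # share the branch via origin
  git checkout master           # back to the main line
  git merge bug1234             # fold the fix into master once you're happy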

When you first create your repo, GitHub makes a “main” branch for you automatically, and calls it “master”.  So, by utilizing the command above, we’re telling Git to push our code (in this case, README.md) to our origin server (GitHub) and put it in the “master” branch.

While we’re on the topic, let’s create two more branches so we can get the full hang of this branching thing.  (Hint:  It’s core to how we integrate this into Puppet).

Makin’ Branches

As I said before, GitHub creates a default “master” branch for you.  If, from your local repository location, you type “git branch”, Git will list a single branch for you:

git branch
* master

This simply tells us which branch we are currently on.  Now, let’s run two commands to create new branches to be tracked by Git.

git branch production
git branch development

Now, run “git branch” again:

git branch
  development
* master
  production

As you can see, your other branches are now visible when running the command.  If you have color, you may notice that the “master” branch is a different color than the others (based on your settings).  If you do not have color, the asterisk denotes what branch is active as well.

Checkout and Commit, Branch and Merge

We have our repository and we have our branches.  We have a single README.md in the current directory, and we are ready to roll committing code and pushing it into our repository.  Let’s perform a simple experiment to get the “hang” of how the branches work and how to switch between them as needed.  Since we’re in “master”, let’s edit our README.md to reflect that by placing a single word in the file: “master”.  (Use vim, as discussed in our last tutorial.)

Once you’re done with your edit, you’ll see that the text is in the file.  You can edit it, and you can cat the file and see the contents, but if you view the file up at GitHub, that content is not there yet.  Somehow, a mechanism must be used to put that data there.  Well, there is such a process, and it is a two-part process.

Recall I mentioned that one of the features of Git is that you can have a complete repository local to your machine.  You can work on that repo and make all sorts of changes completely disconnected from your server (in our case GitHub, or “origin” as it is named to Git).  Therefore, in reality you are dealing with not one, but two repositories: the local one on your machine and the remote one at origin.  (Remember the “git remote add origin” above?)

So, to finalize your changes locally, you must “commit” them to your local repository as “final”.  THEN, you can “push” those changes into your main server (in our case GitHub).   We did as much above with our procedure where we did the commit with a message, and then a push up to origin.  However, now that we’ve made changes locally, they are not yet reflected at GitHub.  Logic would dictate another commit is in order:

git commit
    or
git commit -m 'Some message about your commit'

As you can see, there are two routes you can go.  If you simply supply “git commit” without any options, you will be brought into the text editor you (or your OS) have configured in the $EDITOR environment variable.  Most platforms use “vi” or “vim” for this, but I have also seen “pico” used in some distributions like Ubuntu Linux.  In any event, you can edit the file, placing your comments in; after saving the content and exiting the editor, the commit will be complete.  If, however, you do not put anything in, Git will not commit the changes.  This is to enforce good coding practice by requiring some notes about what a committer is doing before making the changes.  It’s a highly recommended workflow to follow.

Once your commit is complete, phase 1 (the local commit) is over.  You can commit over and over, as many times as you like; you have a full, local repository.  In fact, I’d encourage many commits.  Commit when you think about it.  Commit before you walk away from your system.  Commit randomly for no reason in mid-workflow.  The more commits you have, the less likely you are to lose work.

Finally, to get the data up to GitHub, we need to “push” that data off your repository and into your “origin” repository. This is quite simple, and you’ve done it before:

git push -u origin master

Sometimes you may wish to not keep specifying the location you’re pushing to.  If so, you can set a default location for each branch.  Git will tell you just how to do that if you forget the “-u location branch” option.  Let’s say I’m in my master branch and I simply run a “git push”.  Git will tell me I did something wrong, but will also tell me how to eliminate that problem:

fatal: The current branch master has no upstream branch. To push the current branch and set the remote as upstream, use

git push --set-upstream origin master

“fatal” seems a little melodramatic since Git gives you the answer as to what to do right there.  All you need to do is set the default target once with that last line, and from that point forward, you only need type “git push” when pushing to GitHub.  Hint:  I do this in ALL my branches at create time.  It saves a lot of typing over time, and like any good Sysadmin, I’m lazy.  :)

So, now I’ve got multiple branches that need this setting, but I’m still stuck in “master”.  How do I get to “development” or “production” to perform the same tasks?

Git provides a “checkout” command.  What you’re saying with “checkout” is: “Git, I want to be working on branch ‘x’, and I want you to make that my current branch.  If there are any differences between that branch and the one I’m on, please make those changes on-disk for me so I can exclusively be working in branch ‘x’.”  A little verbose, but you get the point.  So, to move to the next branch and do all the wonderful things we did in “master” above, we perform:

git checkout development
(edit README.md to say different text)
git commit -a -m 'editing README for development branch'
git push --set-upstream origin development
git push

If all has gone well, your development README.md file is now changed and pushed into GitHub.  What about “master”, though?  Well, let’s take a look:

git checkout master
cat README.md

If all has gone well, the contents of README.md are back to what was in your “master” branch.  By checking out “development”, it’ll change back to the new content there.  As a test, check out the “production” branch, change the README.md file, commit it, set your upstream push target, and then push the contents to GitHub.
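A sketch of that exercise (the edit itself is described rather than shown, and the commit message is just an example):

  git checkout production
  (edit README.md to say something specific to production)
  git commit -a -m 'editing README for production branch'
  git push --set-upstream origin production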

Now you’re cooking with gas.

Conclusion

This is a simple tutorial to get you started with Git & GitHub.  There are MANY tutorials and books that can make you into a Git expert, but they are way outside the scope of this humble little blog.  Let me provide a few of those for you here:

Git Help
Git Book
GitHub Help

This documentation should be more than enough to get you moving and well underway with Git ins-and-outs for committing Puppet code and using r10k to interface with and distribute that code around your environment.

Why All the Vim?


As we move on from the last post on Vim, you may ask yourself why I’m reaching all the way back to text editors.  Well, as you’ll see over time, this is all about workflow: building yourself a detailed workflow by which you can write code, syntax check, commit to revision control, deploy to your Puppet instances, and duplicate that workflow across all your environments.

The text editor itself, while important, is just a tiny part of a much larger picture I hope to cobble together over time.  So, let’s begin to push forward with our coverage of Vim.

Plugins and Syntax Highlighting

I went through the beginnings of Vim to give you a starting point and some basics in the event you have no experience in the Vim world.  One would assume that since you’re on a Puppet/Dev-Ops-y sort of page, all this is old news, but we do have completely “green” readers from time to time, and I didn’t want to leave them out.

The main goal in getting Vim in the picture was to bring you to the point where we start looking at our code and knowing what we’re dealing with at a glance.  As you begin to work in the field, whether coding in Perl, Python, Shell, or the Puppet DSL, there are some conventions out there all designed to help you and smooth your workflow.  Among these is syntax highlighting in code editors in general, but (for our purposes) in Vim specifically.

Take a look at this screen:

[screenshot: a Puppet manifest in vim with no syntax highlighting]

While the code is well formatted and everything seems ok, were there any issues in this document, you’d never know it.  From syntax issues to missing elements, none of this is automatically highlighted to you in any way.  Enter syntax highlighting…  Look again:

[screenshot: the same manifest with syntax highlighting enabled]

Much nicer, no?  Were there any missing elements, you’d see something amiss in the document.  The colors would not be organized according to element type, and odd things would be displayed in the page.  Let me “break” the file for you…

[screenshot: the same manifest with a deliberate mistake introduced on line 4]

You’ll notice that on line 4 something is amiss.  If you compare the two colored instances, you’ll see that your eye is drawn to where things begin to be different.  The best part for quick and easy glancing is that the entirety of the file after that one mistake now looks “wrong”.  Ease of view.

How Can This Help With Puppet?

As luck would have it, Vim has a plugin engine that allows you to have pre-built templates that syntax highlight code for you in a predetermined way.  It “recognizes” your code type and highlights accordingly.  The basic plugin structure for Vim lives in your home directory, in the “hidden” .vim directory.  Under this directory you can have a number of wide and varied add-ons to Vim.  We’re just going to talk about plugins.

By default, you don’t have anything in this directory.  You usually have a .vimrc file and a .vim directory in your home directory location, but that’s about it.  The “magic” happens, though, when you add a few pieces.  Those would include a “.vimrc” file which will turn on syntax highlighting, and then a Puppet vim plugin that sorts all the language elements and colorizes them for you.

In your home directory, if it doesn’t already exist, create a .vimrc file with a single entry:

syntax on

This will instruct Vim to syntax highlight and to be aware of any highlighting plugins that may live in the .vim plugin folder.  Next, place the Puppet syntax file (puppet.vim) into your .vim plugin directory.  If it does not yet exist, create it; it lives at ~/.vim/plugin.  Once you load up new Puppet manifests, Vim will recognize the type and begin to highlight the code according to the defined convention from the puppet.vim file you just downloaded.  If you have any issues, look at the Vim plugin reference here.
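A quick sketch of that setup from the shell (assuming you’ve already obtained a puppet.vim syntax file from one of the Puppet Vim plugin projects; the download path is just an example):

  echo "syntax on" >> ~/.vimrc                 # enable highlighting globally
  mkdir -p ~/.vim/plugin                       # create the plugin directory if needed
  cp ~/Downloads/puppet.vim ~/.vim/plugin/     # drop the Puppet syntax file in place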

Now you’re ready to work with Puppet files like a pro!