I’m new to Azure Resource Management (ARM) and Desired State Configuration (DSC), albeit not new to JSON or PowerShell. I recently had the task of migrating our Azure Security labs to a pure Azure-based environment, which meant learning ARM and DSC really quickly. I had to set up VMs, build a DC, create users, make the VMs ‘insecure’, stage malicious payloads, create scheduled tasks, and much more, all so we could illustrate attacks to drive awareness and show our products detecting nefarious activities.
This blog is meant to be a reminder to myself (and you!) of the lessons I learned.
#1: VSCode is awesome
In the past two years I’ve begun to love VSCode, especially when dealing with PowerShell (C# is another story, for now; the new remote-development feature is amazing though, so maybe that needs to be rethought soon!). I did try Visual Studio Enterprise for writing my ARM/JSON, as it does have some right-click capabilities, but it tried to do too much and didn’t let me really understand what was going on or optimize my deployments.
I already hit one of the biggest lessons learned in this journey… VSCode is amazing. It has the right extensions to make this experience pretty painless, including being able to ‘tab complete’ my JSON (most of the time), and it does very well with DSC resources. It doesn’t try to do too much, and it stayed out of the way so I could really learn and optimize what I was trying to do. Sure, I could have used Visual Studio this way, without right-clicking, but VSCode is clean and was never meant to be a full IDE. In this case, a text editor on steroids was exactly what the doctor ordered.
#2: Modularize and iterate, iterate, iterate
Starting off small, testing, and seeing whether it works is great not just for debugging but for learning how to optimize your ARM template. If you try to tackle too many steps at once and then test, something can break, and the error output isn’t always obvious. I can’t tell you how many times I did something super stupid like naming a parameter in my JSON “cred” while my PowerShell expected “creds”, only to get an error message about a PSCredential not being passed securely.
Had I stuck to this design principle, I would have saved myself a few hours.
But then the question becomes: how can you iterate that fast with what eventually become large and complex deployments? This brings me to #3.
#3: ARM Deployments can target the same Resource Group that just failed!
This is huge. Don’t think that every ARM deployment must hit a net-new Resource Group. In fact, it’s much faster to make fixes and redeploy into the existing Resource Group you just attempted a deployment in.
I had no idea what actually happened behind the scenes…
Essentially, the Resource Providers behind the scenes check the new ARM JSON and apply only the differences. So if you were on step 10 of building out the lab, and the first 9 steps were successful but the last one wasn’t, you don’t have to actually ‘redeploy’ the first 9. Azure is smart enough to know when something changed and redeploys only that resource.
Because speed is important, I finally turned to PowerShell instead of the Custom Template UI in the portal.
New-AzResourceGroupDeployment is your friend!! This cmdlet can take your JSON file, even one still sitting locally on your computer, and start your deployment.
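For reference, a minimal invocation looks something like this (the resource group and file names are placeholders of my own):

```powershell
# Deploy (or redeploy) a local template into an existing resource group.
# Deployments default to Incremental mode, so resources that already match
# the template are left alone and only the differences get applied.
New-AzResourceGroupDeployment `
    -ResourceGroupName 'SecurityLab-RG' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json' `
    -Verbose
```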
However, this can get a bit tricky for DSC! Which leads me to trick #4.
#4: VM Extensions don’t delete themselves in a re-Deployment
When I had to fix the DSC itself (we will get to this…), I didn’t realize that if the extension already existed on the resource, the deployment wouldn’t actually push the new DSC module; it would just re-use the one already there.
That said, when fixing DSC, always do a Remove-AzVMExtension first. In fact, this cmdlet, in its Az version, can tab-complete the VMs and extensions for that resource, making it super fast and painless.
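A sketch of what that cleanup looks like (the resource names are placeholders; the DSC extension is conventionally named Microsoft.Powershell.DSC):

```powershell
# Delete the stale DSC extension so the next deployment pushes the
# new configuration archive instead of re-using the old one.
Remove-AzVMExtension `
    -ResourceGroupName 'SecurityLab-RG' `
    -VMName 'DC01' `
    -Name 'Microsoft.Powershell.DSC' `
    -Force
```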
But how do you even do DSC? I didn’t want to use a push service or anything; I wanted it to be clean, with no dependencies, since this is, again, for a lab.
#5: Publish-AzVMDscConfiguration is your friend
This cmdlet (Publish-AzVMDscConfiguration) will make your life so much easier. You can actually keep files on disk and not worry about ‘access tokens’ or ‘access keys’ while learning all this other stuff. Besides, who wants to learn everything in parallel? That goes against the modular design we talked about!
Here is what I used a ton of:
Publish-AzVMDscConfiguration -ConfigurationPath <myfile.ps1> -OutputArchivePath <myfile.zip>
This allowed me to create the compiled DSC archive and keep it on my system so I could later push it to Git. But if I’m not using ‘access tokens’, I must be passing parameters from my JSON to my DSC modules insecurely, right? Nope, and this brings me to my last major lesson learned.
#6: protectedSettings/configurationArguments to the rescue
There isn’t much documentation on this, as ARM templates have been evolving almost monthly, so there is a lot of outdated literature out there on how to do it.
However, I like to KISS (keep it simple), so straight to the official documentation I went… but sadly even that was thin, since the feature was so new.
What I learned was that I can securely pass parameters (such as ‘securestring’ or ‘PSCredential’) by leveraging protectedSettings/configurationArguments/<variable>.
More impressively, I can pass entire objects; if the object has a ‘UserName’ and ‘Password’, I can cast it to a PSCredential.
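On the DSC side, the receiving configuration just declares a PSCredential parameter. Here is a sketch under my own naming (the configuration name ‘LabConfig’, the ‘AdminCred’ parameter, and the local user are illustrative):

```powershell
Configuration LabConfig
{
    param
    (
        # Populated from protectedSettings/configurationArguments/AdminCred;
        # the UserName/Password object from the template arrives here as a
        # PSCredential.
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]
        $AdminCred
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Example use: create a local user from the passed credential.
        User LabAdmin
        {
            UserName = $AdminCred.UserName
            Password = $AdminCred
            Ensure   = 'Present'
        }
    }
}
```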
For example, code like the below let me pass this ‘AdminCred’ in a secure manner, without complex access tokens or storage accounts, to my DSC configuration.
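A minimal sketch of such a DSC extension resource in the ARM template (the VM and parameter names, API version, and archive URL are illustrative assumptions of mine):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/Microsoft.Powershell.DSC')]",
  "apiVersion": "2019-03-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.77",
    "settings": {
      "configuration": {
        "url": "[parameters('dscArchiveUrl')]",
        "script": "LabConfig.ps1",
        "function": "LabConfig"
      }
    },
    "protectedSettings": {
      "configurationArguments": {
        "AdminCred": {
          "UserName": "[parameters('adminUsername')]",
          "Password": "[parameters('adminPassword')]"
        }
      }
    }
  }
}
```

The key point is that anything under protectedSettings is encrypted in transit and never echoed back in deployment output, which is what makes the credential pass safe.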
I’ve had to learn a ton in the last few weeks, from learning how DSC works (including what “compiled PowerShell” meant) to ARM templates and connecting the two.
Hopefully, you can learn from me and build powerful ARM Deployments with custom automation via DSC with a lifecycle that really allows you to stay agile (and secure!).