(Trying to do) Code package deployment to Episerver DXC-S

I have spent way too much time trying to set up code package deployment using TeamCity together with Octopus Deploy. Read this to avoid following my path to failure :-). Well, not a complete failure, I did learn some new things about both PowerShell and Octopus…

Update: “DXC-S” is now known as “Episerver DXP”

Episerver has created a PowerShell module to simplify code package deployments.
To install the EpiCloud module in an Octopus step, I just ran:

Install-Module EpiCloud -Scope CurrentUser -Force

Copy/pasted from this blog post. When uploading a package, the EpiCloud module has a dependency on the Azure.Storage module (version >= 4.4.1). Our Octopus server had an older version of that module, so I just installed Azure.Storage in the same way as above.

DO NOT DO THIS IF YOU ARE USING OCTOPUS DEPLOY!

Azure.Storage has a dependency on AzureRm.Profile, which means that AzureRm.Profile was also updated, and other Octopus steps are (indirectly) using that module. I haven’t investigated all the details, but the result was that I broke deployments for other projects on the Octopus server! Colleagues needed to work extra. Clients got frustrated. I’m sorry 🙁

👎

I guess I wouldn’t have had this issue using Azure DevOps, since the server isn’t shared between builds/deployments(?).
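One thing I could perhaps have tried instead (untested, so treat it as a sketch) is to skip Install-Module on the shared server completely, save the modules to a step-local folder with Save-Module, and import them into the current session only:

# Save the modules to a private folder instead of installing them machine-wide
$modulePath = Join-Path $env:TEMP 'EpiCloudModules'
Save-Module -Name EpiCloud, Azure.Storage -Path $modulePath
# Prepend the folder to PSModulePath for this session only, then import
$env:PSModulePath = "$modulePath;$env:PSModulePath"
Import-Module EpiCloud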

I really wanted to be able to do code package deployments, so I continued on a separate Octopus “worker” (an additional server, which might mean additional license costs for Octopus). On this worker I could do whatever I wanted without affecting others. Not a solution I recommend, but for now I just wanted to get the deploy working.
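For context, the basic EpiCloud flow I was trying to get working on the worker looks roughly like this. It is a sketch from memory; the client key/secret, project id, package name and target environment are all placeholders:

# Sketch of the EpiCloud deployment flow (placeholder values)
Connect-EpiCloud -ClientKey $clientKey -ClientSecret $clientSecret -ProjectId $projectId
$package = 'MySite.cms.app.1.0.0.nupkg'
$sasUrl = Get-EpiDeploymentPackageLocation
Add-EpiDeploymentPackage -SasUrl $sasUrl -Path $package
$deployment = Start-EpiDeployment -DeploymentPackage $package -TargetEnvironment 'Preproduction' -Wait
Complete-EpiDeployment -Id $deployment.Id -Wait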

I continued with setting up the deployment. The first thing I did was to “refactor” the config transformation files for the Preproduction and Production environments. Previously, the transformation files relied on the “transformation chain” that is applied when doing regular deployments to DXC-S. When doing code package deployments, transformations are done in a more conventional fashion, but you need to make sure your transformation files work both when the transformations are chained and when they are not.
This took quite some time (and I’m not sure I got it right).

Next thing was to rename the NuGet file, since the uploaded NuGet package needs to follow a naming convention. Not a big deal, I just renamed it in the nuspec file (that OctoPack is using).

The content of the uploaded NuGet package must also follow a certain structure, and it is not the standard structure used for a web deploy. I could have made this structure change in the nuspec file, but then the package would only work for code package deployments, and not for the other environments… So, I decided to use PowerShell to restructure the content of the NuGet package created by TeamCity. That means: rename the NuGet file to have a .zip extension, create a folder called wwwroot, unzip the contents into that folder, zip that folder, and finally rename the new file back from .zip to .nupkg. Phew!
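The restructuring fits in a few lines of PowerShell. A rough sketch of the idea, with example file names:

# Repack the TeamCity package into the structure DXC-S expects (example names)
Rename-Item 'MySite.cms.app.1.0.0.nupkg' 'MySite.cms.app.1.0.0.zip'
Expand-Archive 'MySite.cms.app.1.0.0.zip' -DestinationPath 'repack\wwwroot'
Remove-Item 'MySite.cms.app.1.0.0.zip'
Compress-Archive -Path 'repack\*' -DestinationPath 'MySite.cms.app.1.0.0.zip'
Rename-Item 'MySite.cms.app.1.0.0.zip' 'MySite.cms.app.1.0.0.nupkg'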

The package created by TeamCity contains all transformation files for the different environments, together with other files specific to an environment (e.g. Episerver license files). The transformations are performed by Octopus (except for DXC-S Preproduction and Production). Before the transformations are done, I do some variable substitution in a few of the transformation files. Those variables can be sensitive keys that you don’t want to check in to source control. I did not find an existing Octopus step that performs only the variable substitution that the Web deploy step does. So, I would need to write custom PowerShell to perform the substitution…
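The substitution itself is easy enough to script; the problem is that it duplicates what the Web deploy step already does for free. A minimal sketch, assuming Octopus-style #{Variable} tokens in the transform file:

# Replace #{VariableName} tokens with values from the Octopus variable dictionary
$file = 'Web.Production.config'
$content = Get-Content $file -Raw
$content = [regex]::Replace($content, '#\{(\w+)\}', { param($m) $OctopusParameters[$m.Groups[1].Value] })
Set-Content $file $content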

By this point I gave up. I don’t think I would have gotten a maintainable solution. What would I think if I were to take over a solution like this from someone else? A very non-standard way of doing things, with quite a lot of custom scripts. A custom script might contain bugs, and probably needs to be maintained by someone.
I would not feel proud of this solution.

  • The config transform files needed to work both as “chained” transforms and when doing a normal transformation. This leads to quite messy transformation files.
  • Needed to have a custom script for substituting variables in files.
  • Needed to have custom scripts for re-structuring the NuGet package.
  • Needed to write custom scripts that duplicate functionality already built into the regular Web deploy step in Octopus.
  • Needed to install additional PowerShell modules, which apparently can affect other deployments on the same Octopus server.

I would really like to hear if anyone has set up code package deployments using Octopus, and what approach you used. It feels like I was constantly working against the tools; I might have missed something obvious.

My humble suggestion to Episerver is to maybe take another approach. I don’t know if it’s possible, but maybe you could make a PowerShell module that works something like this (a rough sketch follows the list):

  1. Initialize a new deployment. This performs all the tasks that are done before the actual deploy begins. I don’t know all the details, but it takes a backup of the db, creates a slot, turns off auto-scaling, etc.
  2. Do the actual deploy to the slot using regular web deploy (the PowerShell module is not used for this step). This means that all standard functionality of Octopus (or Azure DevOps) can be used.
  3. Finalize the deployment. Warm up the slot, swap slots, turn auto-scaling back on, etc.
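Expressed as cmdlets, the flow could look something like the sketch below. To be clear, these commands are made up; they are just my wish list, not anything that exists in EpiCloud today:

# Hypothetical cmdlets – these do not exist (yet?)
Initialize-EpiSlotDeployment -ProjectId $projectId -TargetEnvironment 'Production'
# …step 2 happens here: a regular Octopus/Azure DevOps web deploy to the slot…
Finalize-EpiSlotDeployment -ProjectId $projectId -TargetEnvironment 'Production'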


6 thoughts on “(Trying to do) Code package deployment to Episerver DXC-S”

  1. I did not have any problems except the regular PowerShell hell of trial & error with “simple” syntax such as input variables. But then again, I only used on-prem TFS and sort of replaced Octopus completely with the PaaS Portal: to see which package version is where, “Go Live” for Prod, and source-based deploys if someone still needs those. I will probably use OctoPack shortly, but for now I’m using the default NuGet Packager step and do a little cleanup beforehand.

    I did not turn to the Azure.Storage module but instead leaned on the entire Az module on the shared worker. Haven’t noticed anything break elsewhere, so that might be a suggestion.

    • Thanks Johan! Do you have Az.Storage installed on the worker then? I thought you couldn’t have Az modules side-by-side with the AzureRm modules that are bundled with Octopus?

      My main point is that Octopus Deploy and Azure DevOps have been developed over many years for a reason. What makes an Episerver DXC-S deployment so unique that those tools couldn’t handle it?
      I guess Episerver wants to simplify the deployment, so that we shouldn’t have to set up a complex deployment process. That is an awesome goal, but currently I don’t think it is simpler with all the custom scripting.

      • Not sure on the exact specifics, I just “ordered” Az and EpiCloud installed globally on this setup. I think AzureRM was removed and replaced completely with Az.

        I also had it easy, not needing to create packages for environments other than DXC-S with the EpiCloud deployment, but of course I also faced the Config Transform work.

        No custom PowerShell except some TFS variable handling and logging; my setup is just sequential Add-Start-Complete with the -Wait switch everywhere, except when specifying Production as the target, where I require PaaS portal validation and a click to complete.

  2. Thank you for this great post Erik! This is super useful information for us to plan improvements of the Deployment API.

    Some of my thoughts regarding this:

    The config transform files needed to work both as “chained” transforms and when doing a normal transformation. This leads to quite messy transformation files.

    I’m slightly surprised about this; I thought most transforms would basically replace existing values or do remove/add. That said, improving environment configuration is something we’ve discussed and want to improve, though I can’t promise any dates yet.

    Needed to have a custom script for substituting variables in files.

    I guess this is sort of related to the above. Point taken, we’ll definitely look into what can be done to improve this.

    Needed to have custom scripts for re-structuring the NuGet package.

    This feedback makes sense as well. The reason we have a slightly different structure is to, for example, support applicationHost transforms within the service. But I’ll make sure we look into whether we could support both structures, to simplify usage when the packages also target non-DXC environments and things like applicationHost transforms aren’t used.

    Needed to write custom scripts that duplicate functionality already built into the regular Web deploy step in Octopus.

    Also related to the above points regarding improving configuration, I guess. Valid feedback; let’s see what can be done.

    Needed to install additional PowerShell modules, which apparently can affect other deployments on the same Octopus server.

    I completely see why this is messy. The only reason we have that dependency is to be able to upload the code package to blob storage using a SAS link, which old AzureRm modules don’t support (we added support for the oldest version that had this capability).

    In general though, I guess everyone should try to move away from AzureRm anyway since even the latest version of that will reach EOL fairly soon and is not under active development.

    Again, thanks a lot for posting this! This feedback is really valuable for us and will be used when we plan further improvements of this feature.

    • Thanks Anders. I’m glad to hear that this might be of help.

      For a client we have multiple integrations with external systems; the keys for these integrations are sensitive data that we don’t want to have in git.
      These keys are stored as appSettings in web.config. For local dev environments these keys are in a separate file (not committed to git); for the other environments the keys are added in the deploy step as config transforms.
      So, for Integration we have a bunch of appSettings that are inserted into web.config. In a regular deploy the same appSettings are inserted for the other environments as well. But in a DXC-S chained transformation the appSettings already exist (from the Integration env.), so they are only updated.
      To support both regular and chained transformations, the transforms must handle both the case where the appSettings already exist (and should be updated) and the case where they need to be inserted.

      Regarding the structure in the package: you could keep the current structure as the default, but make it possible to use another structure by pointing out paths (e.g. wwwroot = “/”, applicationHostTransforms = “/myfolder/”).

      Not being dependent on external modules would be a big plus. Uploading to blob storage could be done using a REST API instead(?).
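      For what it’s worth, a plain Put Blob call against a SAS link needs no modules at all. A minimal sketch that ignores large-file (block) handling, and assumes $sasUrl points at the target blob:

      # Upload the package via the Azure Blob REST API using a SAS url
      Invoke-RestMethod -Method Put -Uri $sasUrl -Headers @{ 'x-ms-blob-type' = 'BlockBlob' } -InFile 'MySite.cms.app.1.0.0.nupkg'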

      • Thanks for clarifying!

        That makes sense. You’re probably already doing it now, but for the transforms we apply during deployments we usually do Remove and then InsertIfMissing to support both scenarios, which works well.

        But as you say there are other reasons for improving configuration management in general so hopefully we can address this in other ways as well in the future.

        The package structure suggestion could definitely work as well, we’ll take that into consideration!

        Yes, I agree. The API is, however, a bit more complex than one would think when it comes to, for example, handling uploads of files above a certain size, which is why we wanted to reuse Microsoft’s own implementation instead. But if having that dependency is causing issues, we might need to reconsider of course!

        Thanks again for taking the time to explain these scenarios!
