How to back up and restore tables in Azure Storage

There is currently no built-in solution for backing up tables in Azure Storage, but we can easily do it with the help of AzCopy, a tool provided by Microsoft. Below is a PowerShell script I built to simplify the process of backing up and restoring. It’s a two-stage rocket: first it enumerates all your tables and saves a CSV file with their names, then it uses this file to perform the actual backup (and restore if needed). This way you can easily edit the CSV file to “configure” what to back up.

This script is also available at my GitHub repo.
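The core of the two steps can be sketched roughly like this. This is a minimal sketch, not the full script: the account name, key, paths and CSV file name are placeholders, and the AzCopy flags are from the older, table-aware AzCopy (7.x), so verify them against your version.

```powershell
# Step 1: enumerate all tables and save their names to a CSV file.
$ctx = New-AzureStorageContext -StorageAccountName 'myaccount' -StorageAccountKey $key
Get-AzureStorageTable -Context $ctx |
    Select-Object @{ Name = 'TableName'; Expression = { $_.Name } } |
    Export-Csv -Path .\tables.csv -NoTypeInformation

# Step 2: back up every table listed in the (possibly hand-edited) CSV file.
foreach ($table in Import-Csv -Path .\tables.csv) {
    & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' `
        "/Source:https://myaccount.table.core.windows.net/$($table.TableName)" `
        "/Dest:D:\Backup\$($table.TableName)" `
        "/SourceKey:$key" '/Manifest:backup.manifest'
}
```

Restoring works the same way in reverse, with the local folder as /Source and the table URL as /Dest.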


Posted in Development, Tutorials

Extending the JavaScript console with history recording

The browser JavaScript console is a web developer’s best friend. Using console.log() is essential for debugging your web apps. But there are a few annoyances. Two things that have bothered me for a while are:

  1. Reloading the page clears the console. Yes, there are settings for keeping the log, but personally I prefer to keep this turned off.
  2. Internet Explorer and Edge will only print to the console once the dev tools window has been opened. The console is always empty when opened, so anything you logged before this is lost.

Fortunately JavaScript is a very flexible language. Let’s make the console better!

Design goals:

  • Record all calls to the console’s log(), error(), warn() and info() functions.
  • Recall the console history by calling our own console.history() function.
    • console.history() shows the console history since the current page load.
    • console.history(true) shows everything from the current session.

Here is my solution (also available at my GitHub tutorial repo). I have tried to annotate it to explain what is going on.
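The core idea can be sketched like this. This is a minimal, in-memory version only; the full solution also persists the records to sessionStorage, which is what lets console.history(true) show everything from the current session, not just since the last page load.

```javascript
// Wrap the console methods so every call is recorded before being
// forwarded to the original implementation.
(function (console) {
  var records = [];

  ['log', 'error', 'warn', 'info'].forEach(function (level) {
    var original = console[level];
    console[level] = function () {
      var args = Array.prototype.slice.call(arguments);
      records.push({ level: level, time: new Date(), args: args });
      if (original) {
        original.apply(console, args);
      }
    };
  });

  console.history = function () {
    // Return a copy of everything recorded since the wrapper was installed.
    return records.slice();
  };
})(console);
```

After this runs, logging works exactly as before, but console.history() returns an array of everything that has been logged, including the level and timestamp of each call.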


Posted in Development, Tutorials

Solution to Azure AD App not able to use given permissions

A client I’m working with had trouble adding permissions to their Azure Active Directory Application (AAD App). We added application permissions to allow the app to read all users’ full profiles, read directory data and read O365 unified groups, but no matter what we did our code would get an “insufficient privileges” response back from Microsoft Graph. We even added a brand new app, but it had the same problems, indicating that the problem lay deeper than the app permissions.

When we reached out to Microsoft support, they clearly had a hard time figuring out what was wrong, but finally their engineers presented the following solution, which worked:

  1. Open PowerShell
  2. Connect-MsolService (sign in with a Global Admin account)
  3. Remove-MsolServicePrincipal -ObjectId [GUID] -TenantId [GUID]
  4. Start an InPrivate browsing window
  5. Open the Azure AD authorization URL with client_id=[GUID]&prompt=admin_consent in the query string

This is supposed to remove the Service Principal; after going through the admin consent workflow, a new Service Principal should be created that inherits the permissions given to the application object.

I hope this is helpful to someone in the same situation.

Posted in Development

I stopped using Google Analytics. Here’s why.

For many years I’ve been using Google Analytics (GA) on this site, but more and more I found myself questioning why. So a month ago I finally cut the cord. Here are my reasons.

  1. The first reason is simply that I don’t really have any need for advanced analytics. Sure, it’s always fun to see how many visitors your site has had and what the most popular pages are, but for me that’s where it ends. Google Analytics is way too complex for me and my needs, and I never felt that I really understood how to use it properly. Usage statistics sound very enticing to website owners and stakeholders, and GA always pops up as the first suggestion. But I’m going to be so bold as to suggest that most GA users actually do not need such an advanced tracking system. They would be just as happy with a simpler system. Heck, they might even be happier, since a simpler system that they actually understand and can fully use would be of more value to them!
  2. The second reason is that I don’t like being tracked by Google (or other “data companies”). I even block the GA JavaScript in my browser. I don’t necessarily think being tracked will do me any harm. But the more I think about how much information Google has about everyone, the more scared I get. That being said, I do not mind individual websites tracking me. The problem with Google Analytics is that it tracks an enormous number of sites, and by combining this data (and data from their other services) Google gets an eerily complete picture of you. If you choose Google Analytics because it’s free, think again about what you are giving them. It’s not healthy to let corporations have access to that kind of data! Considering my stance on this, it’s simply not right to subject my website visitors to something I myself despise.
  3. Third: if I am to track my website users, I want to own the collected data myself. Only then can I offer a real privacy policy. Only then can I dispose of the data if I need to. Only then can I (at least in theory) move my data between services. With Google Analytics, who owns your data? I’m not sure. But I do know that you cannot export your data from GA. You can manually download individual reports, but not all your data.

There are lots of alternatives to Google Analytics out there. Some are cloud services, some are self-hosted software. Some are free and even open source, some are paid. For some sites GA is no doubt a very powerful and suitable service. But not for everyone.

Note: At the same time as I removed Google Analytics, I also stopped using AddThis, which I had been using for some years. Turns out AddThis is nowadays owned by Oracle Corporation, another major player in the data field…

Posted in Opinion & Thoughts

Heads-up if you use ADAL.JS on a site added to the “Trusted Sites” zone

While we were working on a SharePoint Online solution that uses the ADAL.JS library for authentication, a customer reported strange problems. In our solution, our site sends the client to the authentication authority (Azure AD), which authenticates the client and forwards it to the reply URL, which in turn returns the client to the original page in a logged-in state.

The problems we saw were mainly that the clients tended to get stuck in an authentication loop, where the client seemed to correctly receive a token from the authority, but our ADAL-based solution did not see the authenticated user and thus sent the browser back to be authenticated, in a seemingly endless loop. The behavior was quite erratic though – sometimes the clients were actually able to log in, either directly or after several tries. Other times it eventually failed and stopped somewhere in the “loop”.

The oddest behavior was however that the browser tab (or whole browser if only one tab) could sometimes simply just close by itself!

I experienced this myself in IE11, but the customer reported the same happening in Chrome too. A clue was that it only happened on the customer’s computers when logged in through their VPN, and only against their own SharePoint Online tenant.

Eventually we cracked the problem. It turns out that the customer had put SharePoint Online (https://*.sharepoint.com) in the Trusted Sites zone under Internet Options in Windows. But authentication with ADAL.JS relies on a hidden iFrame working against a different URL. Because the site and the iFrame belonged to different security zones, they were not allowed to share cookies.

The solution was to simply put the authentication authority (https://*.microsoftonline.com) in the Trusted Sites zone as well.

This weakness and its solution are actually stated in the ADAL.JS readme, but they are easy to miss or forget, and the cause is quite hard to figure out if it happens. Hopefully this post will save someone from this agony.

PS: Need to view all trusted sites, but the list cannot be scrolled because it is controlled by a group policy? Superuser has the answer!

Posted in Development, Tips

Downloading OneDrive documents as PDF

As part of automating a build process I needed a way to download the product documentation from OneDrive and save it as PDF files for shipping with the product. The result is this PowerShell script with two cmdlets: one that takes an array of OneDrive files to download, and one that converts the downloaded DocX files to PDF (using Word). The script is also available on my GitHub.
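The conversion half can be sketched roughly like this. This is a minimal sketch that assumes Word is installed locally; the cmdlet name and paths are illustrative, not the original script’s.

```powershell
# Convert a set of .docx files to PDF using the Word COM automation API.
function Convert-DocxToPdf {
    param([string[]]$Paths)

    $word = New-Object -ComObject Word.Application
    $word.Visible = $false
    $wdFormatPDF = 17  # WdSaveFormat.wdFormatPDF

    try {
        foreach ($path in $Paths) {
            $doc = $word.Documents.Open($path)
            $target = [System.IO.Path]::ChangeExtension($path, 'pdf')
            # Save next to the original, with a .pdf extension.
            $doc.SaveAs([ref]$target, [ref]$wdFormatPDF)
            $doc.Close()
        }
    }
    finally {
        $word.Quit()
    }
}

# Example: Convert-DocxToPdf -Paths @('C:\Build\Docs\Manual.docx')
```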


Posted in Development, Tutorials

A drag-and-drop GUI made with PowerShell

The script below shows how to create a simple WinForms GUI in PowerShell, where you can drag and drop files and folders and then process them with PowerShell commands when you click the button. I’ve pieced this together from various online sources and my own trial and error, and figured it might be useful as a base for others who need to create an interface.


I probably won’t win any design prizes with this one 🙂

Tip: If you are unsure of control names or how to use properties on a control, remember that you can always launch Visual Studio with a WinForms project and then inspect the Properties window and Form1.Designer.cs file for “inspiration”.

Here is the script (or get it at GitHub).
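A condensed sketch of the same idea looks like this. The control names and the button handler are illustrative; replace the processing logic with your own.

```powershell
Add-Type -AssemblyName System.Windows.Forms

$form = New-Object System.Windows.Forms.Form
$form.Text = 'Drop files here'

$listBox = New-Object System.Windows.Forms.ListBox
$listBox.Dock = 'Fill'
$listBox.AllowDrop = $true
$listBox.Add_DragEnter({
    param($sender, $e)
    # Only accept drops that carry file or folder paths.
    if ($e.Data.GetDataPresent([System.Windows.Forms.DataFormats]::FileDrop)) {
        $e.Effect = [System.Windows.Forms.DragDropEffects]::Copy
    }
})
$listBox.Add_DragDrop({
    param($sender, $e)
    foreach ($path in $e.Data.GetData([System.Windows.Forms.DataFormats]::FileDrop)) {
        [void]$listBox.Items.Add($path)
    }
})

$button = New-Object System.Windows.Forms.Button
$button.Text = 'Process'
$button.Dock = 'Bottom'
$button.Add_Click({
    # Replace this with whatever processing you need.
    $listBox.Items | ForEach-Object { Write-Host "Processing $_" }
})

$form.Controls.Add($listBox)
$form.Controls.Add($button)
[void]$form.ShowDialog()
```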

Posted in Development, Tutorials

Think twice before uploading assets to SharePoint Online

Here’s a heads-up if you are uploading assets such as CSS files to SharePoint Online. In a solution I’m working on we have an “Alternate CSS” file that we upload to the master page gallery. Among other things, it contains an @import statement with a protocol-relative URL (one starting with “//”).

Nothing special there. Everything was fine until one day when the icons stopped loading. Debugging led me to the CSS file, in which that line had been changed.

A “//” had been added to the URL. When I saw this I assumed I had made some error. I checked my original files and upload procedures, but could find nothing wrong. Provisioning the file again fixed the problem and everyone was happy.

But a month or so later it happened again. Could SharePoint be modifying the file? We contacted Microsoft support and asked if they knew anything about it. They confirmed that they do indeed modify files in SharePoint Online through a timer job that runs before site collection updates. Basically their response was: deal with it.

So how *do* we deal with this? What we did was to replace “//” with “https://”. So far this seems to be working, but we are aware that it might break at any time. To be really safe you will want to skip @import statements (and other URLs) in files you upload to SharePoint. But we don’t know what kinds of files Microsoft takes the liberty to modify, nor whether they process all locations or just the master page gallery. The only safe solution is probably to skip SharePoint altogether and move the files to a separate web server that you control.
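As an illustration, the change we made amounts to something like this (the URL here is a placeholder, not the actual one from our solution):

```css
/* Before: protocol-relative URL, which the timer job rewrote */
@import url("//cdn.example.com/assets/icons.css");

/* After: absolute URL, untouched so far */
@import url("https://cdn.example.com/assets/icons.css");
```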

What we have to remember is that SharePoint Online is a cloud service, and as such, the operator is in complete control of changes. There are a lot of hidden rules and jobs going on behind the scenes in SharePoint Online that customers are not aware of. Microsoft does not necessarily tell us when changes happen or what the implications will be. And they might change the rules at any time. So we need to be cautious, even when dealing with supported features such as Alternate CSS.

Posted in Development

Converting a CSV file to ResX with PowerShell

If you need to convert a CSV file with terms to a ResX file for use in your .Net project, you can use a simple PowerShell script.

The script assumes your CSV file contains two columns, “Name” and “Value”.

You can use Excel to export such files. You may need to change the -Delimiter parameter in the script to match your locale (Why? See my post on working with CSV files in Excel).
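A minimal sketch of such a script uses the ResXResourceWriter class from the System.Windows.Forms assembly; the file names and the delimiter below are examples.

```powershell
# Sample terms.csv (semicolon-delimited, as Excel produces in many locales):
#   Name;Value
#   Greeting;Hello
#   Farewell;Goodbye

Add-Type -AssemblyName System.Windows.Forms  # contains System.Resources.ResXResourceWriter

$writer = New-Object System.Resources.ResXResourceWriter('.\Terms.resx')
foreach ($row in (Import-Csv -Path '.\terms.csv' -Delimiter ';')) {
    # Each CSV row becomes one <data> entry in the ResX file.
    $writer.AddResource($row.Name, $row.Value)
}
$writer.Close()  # writes the file, including the ResX schema header
```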

Posted in Development, Tips

Working with enterprise custom fields in Project Online

I was tasked with writing a utility for setting Enterprise Custom Fields on User Resources in Microsoft Project Online. The fields got their values from Lookup Tables. Since it was Project Online, the client-side object model, CSOM, had to be used. Without previous experience of MS Project, this proved to be tricky.

Here I present a C# class that should help anyone in the same situation get started. It is a simple console application that shows various ways to interact with MS Project, and in particular how to extract available custom field values and set them on user resources. Below are some interesting snippets of the code showing how to work with Project and CSOM. The complete class can do more and is available at the bottom as well as on GitHub.


Project Online is a (very) customized SharePoint site collection. We can use the same techniques when working with it as with any SharePoint Online site. As always, we need to begin by getting a context. In this case we request a ProjectContext, which is just a normal SharePoint context object with additional properties:
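Roughly like this, as a sketch; the PWA URL and credentials are placeholders, and SharePointOnlineCredentials comes from the SharePoint client SDK:

```csharp
using System.Security;
using Microsoft.SharePoint.Client;
using Microsoft.ProjectServer.Client;

// Connect to the PWA site collection; ProjectContext derives from ClientContext.
var password = new SecureString();
foreach (char c in "password-placeholder") password.AppendChar(c);

var ctx = new ProjectContext("https://tenant.sharepoint.com/sites/pwa")
{
    Credentials = new SharePointOnlineCredentials("admin@tenant.onmicrosoft.com", password)
};
```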


As always in CSOM, we need to explicitly load all resources we need to access:
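Something like this (a sketch; Include lets us pull the lookup table entries in the same round trip):

```csharp
// Queue up the collections we need, then execute a single round trip.
ctx.Load(ctx.CustomFields);
ctx.Load(ctx.LookupTables, tables => tables.Include(t => t.Name, t => t.Entries));
ctx.Load(ctx.EnterpriseResources);
ctx.ExecuteQuery();
```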


Listing enterprise custom fields and lookup table entries is simply a matter of iterating over the respective properties:
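For example (assuming the collections were loaded as above):

```csharp
foreach (CustomField field in ctx.CustomFields)
{
    Console.WriteLine($"{field.Name} ({field.InternalName})");
}

foreach (LookupTable table in ctx.LookupTables)
{
    Console.WriteLine(table.Name);
    foreach (LookupEntry entry in table.Entries)
    {
        // FullValue is the display name; InternalName is GUID-based.
        Console.WriteLine($"  {entry.FullValue} ({entry.InternalName})");
    }
}
```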


A user resource is not the same as the user object. It seems that we can’t trust that all users’ emails are synced to Project Online. To get around this, I load the user resource using the login name instead of just the email, by simply prepending the claims prefix to the email:
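The claims-style login name is built like this (a sketch; the email is a placeholder, and the lookup of the matching EnterpriseResource is omitted here):

```csharp
string email = "user@tenant.onmicrosoft.com";
string loginName = "i:0#.f|membership|" + email;

// ProjectContext is a ClientContext, so the SharePoint user APIs are available.
User user = ctx.Web.EnsureUser(loginName);
ctx.Load(user);
ctx.ExecuteQuery();
```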


Now we can get the custom fields from the user resource. Note that fieldValue is a string array, because the field can be multi-valued.
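Reading a field off an already-loaded resource can be sketched like this; the internal name below is a placeholder for a real custom field’s internal name:

```csharp
// resource is an EnterpriseResource whose CustomFields were loaded earlier.
var fieldValue = resource["Custom_000000000000000000000000000000"] as string[];
if (fieldValue != null)
{
    foreach (string value in fieldValue)
    {
        Console.WriteLine(value);
    }
}
```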


To set a custom field with the value of a lookup table we need to understand the internal structure:

  • Each value in both the custom fields and lookup tables has a GUID, an internal name and a display name (also called simply name or full value). (The internal name is actually a concatenation of the GUID and a word like “custom” or “entry”.)
  • A custom field may be bound to a lookup table. We use the internal name of the lookup table entries to set such custom fields.
  • Since these are custom fields, the compiler is not aware of them. In order to set such a field we need to access it using an [indexer] together with the internal name.
  • Since field values can be multi-valued, we need to set it using a string array.
  • Finally, to persist the changes we need to update the EnterpriseResources collection, because this is where the resources are stored.

Now that we know all this, it is actually quite easy to set the field! Assuming that we already have the internal names of the custom field and lookup table entry we wish to set it to:
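Following the bullet points above, the write can be sketched like this; the internal names are placeholders for the ones listed from ctx.CustomFields and ctx.LookupTables:

```csharp
string fieldInternalName = "Custom_000000000000000000000000000000";
string entryInternalName = "Entry_000000000000000000000000000000";

// Custom fields are set through the indexer, always as a string array.
resource[fieldInternalName] = new string[] { entryInternalName };

// Persist by updating the collection that owns the resource.
ctx.EnterpriseResources.Update();
ctx.ExecuteQuery();
```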


Putting all of this together, I made a class that can be used to read and write custom fields, and that also shows how to list a bunch of information from Project sites and their users. Use it as a template for making your own solution. You need to modify it to use your own login information and GUIDs before you can use it. Again, the full Visual Studio solution can be downloaded from GitHub.


Posted in Development, Tutorials