Integration tests using Azure Storage emulator and .NET Core in Azure DevOps
I had a friend contact me about a problem he was trying to solve, and since I was about to run into the same situation myself, I decided to tackle it. The problem: we both need to test our code, but our code depends on Azure Storage. As you know, you can emulate Azure Storage on Windows using the (now deprecated) Azure Storage Emulator, or using Azurite.
Since my code is built and tested on Linux, I decided to use Azurite. Azurite v3 runs in a container; Azurite v2 runs on Node.js, but its container image is not officially available, so you need to build the image yourself.
One thing I did not want to do was run my tests inside a Docker container; I wanted my code to run directly on the host. After a few tries, I was able to make this work.
For this post, I will be using Azurite v2, since I wanted to test against Table storage. Follow along to see how this can be done.
Setup
Docker
The first thing you need is a place to host the Azurite Docker image yourself. Docker Hub now enforces download limits, and since your pipeline(s) may run many times during the day, you don’t want to hit that limit. I’ve written an article on how to do this. Check it out!
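As a minimal sketch (the registry name is a placeholder and the local image tag is an assumption), pushing an Azurite image that you have built or pulled locally to your own Azure Container Registry could look like this:
# Sketch only: assumes an Azurite image tagged azurite:2.7.3 already exists locally
# and that you have an Azure Container Registry named <yourorg>.
az acr login --name <yourorg>

# Re-tag the local image for your registry, then push it
docker tag azurite:2.7.3 <yourorg>.azurecr.io/azure-storage/azurite:2.7.3
docker push <yourorg>.azurecr.io/azure-storage/azurite:2.7.3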
Code
To demonstrate the behavior, I will use the following test code. The libraries I am using are XUnit, FluentAssertions and Bogus.
First, I define a FakeEntity
public class FakeEntity : TableEntity
{
    public string Value { get; set; }
}
Then, I define the actual test
[Fact]
public async Task execute_query_async_should_return_the_expected_records()
{
    // Arrange
    var tableName = "test";
    var faker = new Faker();
    var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
    var cloudTableClient = storageAccount.CreateCloudTableClient();

    var dataSet = Enumerable.Range(1, 5).Select(_ => new FakeEntity
    {
        Value = faker.Hacker.Phrase(),
        PartitionKey = faker.Company.CompanyName(),
        RowKey = Guid.NewGuid().ToString()
    }).ToList();

    // Act
    var table = cloudTableClient.GetTableReference(tableName);
    await table.CreateIfNotExistsAsync();

    foreach (var item in dataSet)
        await table.ExecuteAsync(TableOperation.Insert(item));

    var query = new TableQuery<FakeEntity>();
    var data = new List<FakeEntity>();
    TableContinuationToken token = null;
    CancellationToken ct = new();

    do
    {
        var seg = await table.ExecuteQuerySegmentedAsync<FakeEntity>(query, token, ct);
        token = seg.ContinuationToken;
        data.AddRange(seg);
    } while (token != null && !ct.IsCancellationRequested);

    // Assert
    // Exclude .CompiledRead/.CompiledWrite from the TableEntity object; we do not care about those.
    // Also exclude ETag: per the API, when odata=minimalmetadata is set it is not returned
    // (see https://docs.microsoft.com/en-us/rest/api/storageservices/payload-format-for-table-service-operations),
    // yet the (deprecated) Windows storage emulator does return it. Possibly an emulator bug.
    data.Should().BeEquivalentTo(dataSet, option =>
        option.Excluding(memberInfo =>
            memberInfo.SelectedMemberInfo.Name.StartsWith("Compiled") ||
            memberInfo.SelectedMemberInfo.Name == "ETag"));
}
Since I am using the emulator, I have to use the development storage connection string.
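For reference, the development storage shortcut can also be written out explicitly; the following is equivalent to the CloudStorageAccount.DevelopmentStorageAccount property used above:
// Equivalent to CloudStorageAccount.DevelopmentStorageAccount
var storageAccount = CloudStorageAccount.Parse("UseDevelopmentStorage=true");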
Azure DevOps pipeline
To run the Azurite Docker container, I create a script task that includes the Docker command
- script: |
    mkdir -p "$(Pipeline.Workspace)/dockervolumes/azurite"
    docker run -e executable=table -t -d -p 10002:10002 -v $(Pipeline.Workspace)/dockervolumes/azurite:/opt/azurite/folder --name azurite <yourorg>.azurecr.io/azure-storage/azurite:2.7.3
  displayName: 'Start azurite container'
Running the test
Everything looks good and dandy and I’m ready to run my test. Boom! The task fails because the test could not connect to the storage. But I don’t understand, I exposed the ports, didn’t I? This is where understanding the Azure DevOps platform is necessary (thanks to a post on Stack Overflow!). The Azure DevOps build agent runs in a Docker container, which means that to reach the Azurite container we have two possibilities:
- Access the container via the Docker Host
- Direct Access to the Docker Container
Both answers are good. I decided to go with the latter because I didn’t want to install extra tooling.
Making the necessary changes
Pipeline file
So I need to grab the container IP address. I change the script task to the following:
- script: |
    mkdir -p "$(Pipeline.Workspace)/dockervolumes/azurite"
    docker run -e executable=table -t -d -p 10002:10002 -v $(Pipeline.Workspace)/dockervolumes/azurite:/opt/azurite/folder --name azurite <yourorg>.azurecr.io/azure-storage/azurite:2.7.3
    ips=($(docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' azurite))
    containerIp=${ips[0]}
    echo "Container ip = $containerIp"
    echo "##vso[task.setvariable variable=AZS_CONTAINER_IP]$containerIp"
  displayName: 'Start azurite container'
Note the ips and containerIp variables: that is where I grab the container IP address. Once I have the container IP address, I save it as a pipeline variable so that I can use it in my next task, the test task.
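Variables set with ##vso[task.setvariable] are exposed to subsequent steps as environment variables, so the test task can read AZS_CONTAINER_IP without any extra plumbing. As a minimal sketch (the task inputs and project glob are assumptions, not the exact task from my pipeline), the test step could look like this:
# Sketch only: a later step in the same job, where AZS_CONTAINER_IP is already
# available as an environment variable to the test process.
- task: DotNetCoreCLI@2
  displayName: 'Run integration tests'
  inputs:
    command: test
    projects: '**/*Tests.csproj'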
Code
I can’t directly use the development storage account object via the predefined property, because its endpoints point at 127.0.0.1. However, I can use the full connection string with the account name and key as defined in the documentation:
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;
AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;
TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
What I need to do is replace 127.0.0.1 with the container IP that is set in the AZS_CONTAINER_IP environment variable from the DevOps pipeline.
To do this, I need to tweak the code in two places.
XUnit Fixture
I use a Fixture here so that I can share this setup between tests.
In the Fixture class, I add the following code, which sets the connection string that will be used in the tests:
var isBuild = Environment.GetEnvironmentVariable("TF_BUILD");
if (!string.IsNullOrEmpty(isBuild))
{
    var hostIp = Environment.GetEnvironmentVariable("AZS_CONTAINER_IP");
    var connectionString = string.Format(
        "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://{0}:10000/devstoreaccount1;QueueEndpoint=http://{0}:10001/devstoreaccount1;TableEndpoint=http://{0}:10002/devstoreaccount1;",
        hostIp);
    Environment.SetEnvironmentVariable("AZURE_STORAGE_CONN_STRING", connectionString);
}
else
{
    Environment.SetEnvironmentVariable("AZURE_STORAGE_CONN_STRING", "UseDevelopmentStorage=true");
}
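For context, this snippet lives in the Fixture’s constructor. Here is a minimal sketch of how such a Fixture can be wired to a test class with XUnit’s IClassFixture<T> (the class names are placeholders, not the actual names from my project):
using Xunit;

// Sketch only: placeholder names. The connection-string code above goes in the
// fixture's constructor so it runs once; XUnit shares the instance across all
// tests in the class via IClassFixture<T>.
public class AzureStorageFixture
{
    public AzureStorageFixture()
    {
        // ... the connection string setup shown above ...
    }
}

public class TableStorageTests : IClassFixture<AzureStorageFixture>
{
    public TableStorageTests(AzureStorageFixture fixture)
    {
        // XUnit injects the shared fixture instance here
    }

    // [Fact] tests go here
}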
The TF_BUILD environment variable is set when the build runs on a build server, specifically TFS or Azure DevOps. In that case, the connection string is generated from the container IP.
As a fallback, if the code does not run on a build server, I use the default development storage connection string. This covers the case where the tests need to run, for instance, on a local computer.
Storage Account
I will replace the line that uses the predefined CloudStorageAccount.DevelopmentStorageAccount property, specifically
var storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
with
var storageAccount = CloudStorageAccount.Parse(Environment.GetEnvironmentVariable("AZURE_STORAGE_CONN_STRING"));
This uses the environment variable that I set up in the Fixture.
Conclusion
As you can see, you can now use a storage emulator to test your Azure Storage code. You can use this same technique for any type of service that is exposed from within a container.