Saturday, November 29, 2025

Signing ClickOnce Office/Excel Plugin using Azure Trusted Signing

Task

I maintain a Microsoft Office Excel plugin deployed via ClickOnce. It has run for several years without changes, but a customer recently requested some updates. As a result, I needed to revisit how to code-sign the ClickOnce application, given the certificate changes introduced in recent years.

I am no longer able to use the ClickOnce signing UI in the Visual Studio IDE.


Code-signing certificates are not cheap, so I wanted to explore whether Azure Trusted Signing could be used to sign ClickOnce-deployed Office plugin applications. I got it working, but finding a solution was more difficult than I expected.


Why the Code Signing Process Changed

The primary motivation was to enhance security and prevent the widespread misuse of compromised private keys, which were often easily stolen when stored as software files (like PFX) on potentially insecure development machines. The new requirements became effective on June 1, 2023. All major Certificate Authorities (CAs), such as DigiCert, Sectigo, and GlobalSign, agreed to adhere to these new rules. They now issue all new standard (OV/IV) and EV code signing certificates with the private keys generated and stored on secure hardware devices, such as USB tokens or Hardware Security Modules (HSMs), that meet specific security standards (FIPS 140-2 Level 2 or Common Criteria EAL 4+).

See GlobalSign's notice here.


Prerequisites: Azure Trusted Signing setup

Follow steps 1-7 from https://melatonin.dev/blog/code-signing-on-windows-with-azure-trusted-signing/.

I also reviewed the following blog posts, which contained some helpful details.

Process for Signing ClickOnce Office/Excel Plugin using Azure Trusted Signing


None of the above blog posts gave me a fully working solution, since signtool.exe, which they use, cannot fully sign the ClickOnce application's manifest files (XML files), so I spent some more time searching on Google.

I ended up finding the issue "Added support for Trusted Signing" on GitHub for Sign CLI. Reviewing the files led to the steps below.

  • Open PowerShell as Administrator
  • Log in to the relevant Azure account (the one where you created the subscription and Trusted Signing resources above) via the PowerShell command line

    az login


  • My output on the command line was something like the following:

    Select the account you want to log in with. For more information on login with Azure CLI, see https://go.microsoft.com/fwlink/?linkid=2271136

    Retrieving tenants and subscriptions for the selection...

    [Tenant and subscription selection]

    No     Subscription name               Subscription ID                       Tenant
    -----  ------------------------------  ------------------------------------  -------------
    [1] *  ABCCodeSigningSubscription      12345678-1234-1234-1234-123456789012  ABC

    The default is marked with an *; the default tenant is 'ABC' and subscription is 'ABCCodeSigningSubscription' (12345678-1234-1234-1234-123456789012).

    Select a subscription and tenant (Type a number or Enter for no changes): 1

    Tenant: ABCTenant
    Subscription: ABCCodeSigningSubscription (12345678-1234-1234-1234-123456789012)

    [Announcements]
    With the new Azure CLI login experience, you can select the subscription you want to use more easily. Learn more about it and its configuration at https://go.microsoft.com/fwlink/?linkid=2271236

    If you encounter any problem, please open an issue at https://aka.ms/azclibug

    [Warning] The login output has been updated. Please be aware that it no longer displays the full list of available subscriptions by default.

  • Install the Sign CLI (for this blog post I used https://github.com/dotnet/sign version 0.9.1-beta.25379.1+ba6e717abf74a693f0f9c5e891c0e3ef624956b3)

    dotnet tool install --tool-path . --prerelease sign 

  • By reviewing the code changes in https://github.com/dotnet/sign/pull/716/files and the command-line help for trusted signing, I worked out the options I needed:


    Description:
      Use Trusted Signing.

    Usage:
      sign code trusted-signing <file(s)>... [options]

    Arguments:
      <file(s)>  File(s) to sign.

    Options:
      -tse, --trusted-signing-endpoint <trusted-signing-endpoint> (REQUIRED)                         The Trusted Signing Account endpoint. The value must be a URI that aligns to the region that your Trusted Signing Account and Certificate Profile were created in.
      -tsa, --trusted-signing-account <trusted-signing-account> (REQUIRED)                           The Trusted Signing Account name.
      -tscp, --trusted-signing-certificate-profile <trusted-signing-certificate-profile> (REQUIRED)  The Certificate Profile name.
      -act, --azure-credential-type <azure-cli|azure-powershell|managed-identity|workload-identity>  Azure credential type that will be used. This defaults to DefaultAzureCredential.
      -mici, --managed-identity-client-id <managed-identity-client-id>                               The client id of a user assigned ManagedIdentity.
      -miri, --managed-identity-resource-id <managed-identity-resource-id>                           The resource id of a user assigned ManagedIdentity.
      -an, --application-name <application-name>                                                     Application name (ClickOnce).
      -d, --description <description>                                                                Description of the signing certificate.
      -u, --description-url <description-url>                                                        Description URL of the signing certificate.
      -b, --base-directory <base-directory>                                                          Base directory for files.  Overrides the current working directory. [default: D:\]
      -o, --output <output>                                                                          Output file or directory. If omitted, input files will be overwritten.
      -pn, --publisher-name <publisher-name>                                                         Publisher name (ClickOnce).
      -fl, --file-list <file-list>                                                                   Path to file containing paths of files to sign or to exclude from signing.
      -rc, --recurse-containers                                                                      Sign container contents. [default: True]
      -fd, --file-digest <file-digest>                                                               Digest algorithm to hash files with. Allowed values are 'sha256', 'sha384', and 'sha512'. [default: SHA256]
      -t, --timestamp-url <timestamp-url>                                                            RFC 3161 timestamp server URL. [default: http://timestamp.acs.microsoft.com/]
      -td, --timestamp-digest <timestamp-digest>                                                     Digest algorithm for the RFC 3161 timestamp server. Allowed values are sha256, sha384, and sha512. [default: SHA256]
      -m, --max-concurrency <max-concurrency>                                                        Maximum concurrency. [default: 4]
      -v, --verbosity <Critical|Debug|Error|Information|None|Trace|Warning>                          Sets the verbosity level. Allowed values are 'none', 'critical', 'error', 'warning', 'information', 'debug', and 'trace'.
                                                                                                     [default: Warning]
      -?, -h, --help                                                                                 Show help and usage information

  • I ended up with the following command

    & ./sign code trusted-signing "D:\ClickOncePublishDirectory\ABCApp.vsto" -tse https://weu.codesigning.azure.net -tsa ABCCodeSigning -tscp ABCCodeSigningCertificate -an ABCApp -pn ABC -t "http://timestamp.acs.microsoft.com" -v debug 

  • I initially had problems signing the ClickOnce application because there were quite a few older versions under the "Application Files" directory. Once I removed the older versions and kept only the latest build, it worked fine.
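As a convenience, pruning the old builds can be scripted. The sketch below is a minimal, hypothetical helper, not part of the actual signing process described above; it assumes ClickOnce version directories follow the usual `AppName_1_0_0_4` naming convention under "Application Files", so adjust it to match your publish output before using it.

```python
# Hypothetical sketch: keep only the newest version folder under a
# ClickOnce "Application Files" directory before signing.
# Assumes directory names like "ABCApp_1_0_0_4"; verify against your output.
import re
import shutil
from pathlib import Path


def prune_old_versions(app_files_dir):
    """Remove all but the highest-versioned ClickOnce build directory.

    Returns the names of the removed directories.
    """
    root = Path(app_files_dir)
    versioned = []
    for child in root.iterdir():
        if not child.is_dir():
            continue
        # Split "ABCApp_1_0_0_4" into a name and an underscore-separated version.
        m = re.match(r"^(?P<name>.+?)_(?P<ver>\d+(?:_\d+)*)$", child.name)
        if m:
            version = tuple(int(p) for p in m.group("ver").split("_"))
            versioned.append((version, child))
    if len(versioned) <= 1:
        return []  # nothing (or only one build) to prune
    versioned.sort(key=lambda t: t[0])  # oldest first, numerically
    removed = []
    for _, old_dir in versioned[:-1]:   # everything except the newest build
        shutil.rmtree(old_dir)
        removed.append(old_dir.name)
    return removed
```

Note that the version comparison is numeric per segment, so `_1_0_0_10` correctly sorts after `_1_0_0_4`, which a plain string sort would get wrong.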

I hope this helps other people who may be facing a similar task.



Saturday, November 14, 2020

Why does the medical community in general not seem more interested in early detection?

Background

For a number of years I have been investigating and working on how to use IT/IoT/mobile devices to reduce the time that passes from when a frail/elderly person, or a person with a chronic condition, becomes ill until the appropriate reaction/treatment can be started.
Historically, this process has almost always been reactive: the person in question needed to become objectively, visibly ill before someone would or could react. The history of medical diagnosis is a long and interesting one, and methods have moved from rather crude to more and more sophisticated over time. But even with these great improvements, it is still a reactive process.


"Things that take time"

  • The patient's condition starts exhibiting itself or worsens
  • The patient's condition has to be perceived as "bad enough" to cross the patient's "do I call the medical services"-threshold
  • Physical transport time to the health care professional
  • Potential waiting times at the health care professional (waiting time at the emergency room, or the opening hours of a private physician)
  • The health care professional may not have prior knowledge of the patient's condition and may misdiagnose it
  • Tests are done
  • Treatments are started

Depending on many factors, such as

  • the patient's condition
  • the patient's "do I call the medical services"-threshold
  • the patient's physical location in relation to the relevant health care provider
the time from when the condition starts exhibiting itself or worsens until the appropriate treatment has begun can range from a few hours (probably the very best case, for non-life-threatening conditions) to several days.

This is not an optimal use of valuable time, and it can cause the patient's health to deteriorate to a greater or lesser degree.

An example: COPD
( https://en.wikipedia.org/wiki/Chronic_obstructive_pulmonary_disease )


Since 2013 we have been working with a list of objective rules:

  • some rules use fixed ranges for a given value; if you are outside the range, that triggers a yellow or a red color (as opposed to a green color, where things are as good as they can be given the conditions)
  • some rules use percentage deviation from a baseline
These rules have been set fairly conservatively, and based on remote monitoring we get frequent data from the patient. This rule algorithm has more or less eliminated most of the "things that take time".
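The two rule types above can be sketched roughly as follows. This is a minimal illustration only, not the actual production algorithm, and all threshold values shown are made up rather than clinical.

```python
# Minimal sketch of a traffic-light rule engine of the kind described above.
# All ranges and percentages here are illustrative, NOT clinical values.

GREEN, YELLOW, RED = "green", "yellow", "red"


def fixed_range_rule(value, green_range, yellow_range):
    """Fixed-range rule: green inside green_range, yellow inside the wider
    yellow_range, red otherwise. Ranges are inclusive (low, high) tuples."""
    low, high = green_range
    if low <= value <= high:
        return GREEN
    low, high = yellow_range
    if low <= value <= high:
        return YELLOW
    return RED


def baseline_deviation_rule(value, baseline, yellow_pct, red_pct):
    """Baseline rule: color by percentage deviation from the patient's own
    baseline value, e.g. yellow at >=5% deviation and red at >=10%."""
    deviation = abs(value - baseline) / baseline * 100.0
    if deviation >= red_pct:
        return RED
    if deviation >= yellow_pct:
        return YELLOW
    return GREEN
```

For example, with a (made-up) green range of 94-100 and yellow range of 90-100, a reading of 92 would flag yellow; with a baseline of 95 and thresholds of 5%/10%, a reading of 88 deviates about 7.4% and would also flag yellow.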

It is still a reactive process, in that the exacerbation has to manifest itself in a measurable way, such as colored sputum (spit/mucus) or declining lung function. But these signs are in many cases measurable days, if not up to a week, before the patient's condition worsens to the degree that they are admitted to hospital. This early detection often makes the medical corrections needed significantly smaller.

If this is paired with a nurse specialising in COPD, with a medical doctor specialising in COPD as backup, who monitor the patients' values and receive alarms when a situation starts to decline, and with emergency medication located with the patients, then in the majority of cases you can preempt patients being admitted to hospital.


A question 


People in general are really good at using step counters, Apple Watches and other health-related devices for exercise and other purposes. Why have the medical community and the hospitals not pushed for greater adoption of remote patient monitoring in general?

Wednesday, February 26, 2014

WebRTC - Current state of affairs - Native App vs HTML/Web - iOS / Android / MacOS / Windows

I received an email inquiring about how to approach WebRTC development in an app; these are my experiences based on some of the functionality that we have implemented.

Depending on what your requirements are, the easiest way is to do web only, but then you are currently ruling out iOS support.

In Google Chrome it works on Windows, Linux, MacOS and Android. We have tested cross-browser on Windows, and it works between Chrome, Firefox (and Opera).

It is important to note that this is the full Chrome app/application, not the development component "WebView". I have not been able to find a WebView that supports WebRTC, and the new one in KitKat (Android 4.4) does not either (even though it is built on Chrome rather than the original native Android browser).

Chrome for Android supports a few features which aren't enabled in the WebView, including:
  • WebGL 3D canvas
  • WebRTC
  • WebAudio
  • Fullscreen API
  • Form validation
(https://developers.google.com/chrome/mobile/docs/webview/overview)

So it is possible to create a web-based WebRTC application that will work across Android, Windows, Linux and MacOS and at least on Windows it works across Chrome, Firefox and Opera.


IE is currently a no-go, although some people report that IE supports WebRTC when using http://www.google.com/chromeframe
(http://stackoverflow.com/questions/15724913/which-version-of-microsoft-internet-explorer-support-webrtc)

It works pretty well across fixed network connections like ADSL/ADSL, as well as 3G/ADSL and 3G/3G networks, but occasionally you run into a paranoid system admin who has locked the firewall down so tight that neither incoming nor outgoing connections work, and then you will face problems. This can also occur at a higher level (i.e. at the ISP). ICE, STUN and TURN can only do so much.



What about native app development and support?

My thoughts on how to do this can be found in the following blog post: http://kenneththorman.blogspot.dk/2014/01/goal-cross-platform-android-ios-webrtc.html

We are working on both an Android and an iOS solution; it is being developed on GitHub and everyone is free to contribute. Currently we are facing some problems with the Android version, as can be seen here. The iOS version is technically a bit harder than the Android version, since Google/the WebRTC team have already done most of the hard work for the Android side in their Java demo app.

So, all in all, web is the most doable way at the moment, but there are people who have this working on iOS using Objective-C and on Android using native Java.
http://ninjanetic.com/how-to-get-started-with-webrtc-and-ios-without-wasting-10-hours-of-your-life/

http://kenneththorman.blogspot.dk/2014/01/webrtc-app-c-xamarin-part-1-building.html

Wanting a cross-device code base, and thus creating it in C#/MVVMCross, makes it harder, since you have to take mono/native (Dalvik/iOS) communication into account, and you may run into restrictions in the Xamarin-supported API for low-level native APIs.


Not many people on the internet seem to have the skills to pull this off, especially if you are aiming for both web and native.

The skill set required involves one or more of the following, depending on the signaling system you end up using and your functional requirements:


All in all, the complexity still shines through the WebRTC proposal, not because it is bad, but because it targets such a diverse runtime environment, and it is such an amazingly complex topic. On top of this, the users' perception is "this is just like a phone call, it should just work in 100% of cases". They do not see or understand that there might be a difference when calling from the same device after moving from a WiFi to a 3G network. This is plumbing, and it should just work - always - on all devices - everywhere. Ouch! That is a pretty high expectation to fulfill.


This is still bleeding edge; I am not even sure it is out of the alpha/beta stages yet, at least officially, so there are reference implementation (Chromium, i.e. Google Chrome) bugs that might bite you.


This is a super exciting area, but at the moment web seems to me the most approachable option.


Another option is to buy one of the cloud-based offerings that have both Java and iOS components that make this easier. They shield you from some of this complexity, but that comes at a cost.

Neither web nor native is without pitfalls, but both are certainly doable if you have the skills.

Wednesday, January 22, 2014

Xamarin / Mono: Android adb tracing / debug logging

adb shell setprop debug.checkjni 1
adb shell setprop debug.mono.env MONO_LOG_LEVEL=info
adb shell setprop debug.mono.log gref,gc


And if none of the above helps:

adb shell setprop debug.mono.trace N:Your.App.Namespace

Undoing

adb shell setprop debug.checkjni 0
adb shell setprop debug.mono.env ''
adb shell setprop debug.mono.log ''
adb shell setprop debug.mono.trace ''

Monday, January 20, 2014

Goal: Cross platform (Android / iOS) WebRTC app using MVVMCross

I am attempting to port two Java apps that are part of the WebRTC code base to Mono/C#.
The code is available on GitHub; it is still a fairly early version, but it builds (there are some problems at run time).

Project goals
  1. make an open-source, viable implementation of WebRTC for use in Mono / Xamarin / .NET on Android and iOS
  2. make an implementation of a WebRTC UIView/Activity that can be re-used across Android and iOS with as few modifications as possible
  3. be able to share as much code as possible between the different platforms by utilizing PCLs (portable class libraries) and MVVMCross
  4. move all possible UI logic to a shared portable class library using MVVMCross ViewModels
  5. move the WebRTC signaling behind an interface located in a portable class library with a reference implementation, allowing people to reuse signaling code across platforms while still being able to implement their own
  6. in time, write a new native interface similar to Java's JNI, but without JNI, directly targeting invocation via P/Invoke / Mono / .NET, allowing C#/Mono to interact directly with the C/C++ WebRTC library (see the related question at Mono-JNI-Performance)
At least one of these goals is outside my comfort zone (#6), and all of them are currently pending; I believe #1 is well in progress.

Currently I am still in the process of getting the actual WebRTC meeting working. I believe I have a problem in the converted signaling code, which I am trying to track down.



GitHub repositories
appspotdemo-mono (this is the repository that I am focusing my immediate attention on - but I have been short on time lately)

Background for the webrtc-app-mono repository:
When I built the native .so (http://kenneththorman.blogspot.dk/2014/01/webrtc-app-c-xamarin-part-1-building.html) it also built a full Android APK that you can install and test. That was the app I had already tried on my devices, which worked the same way as apprtc.appspot.com, and that was the code I was actually looking for. As embarrassing as it is to admit, the repo https://github.com/kenneththorman/webrtc-app-mono was me porting the "wrong" Java app code. When I found out, and figured out how to work with JNI (mainly through using the Java Binding Library), I went looking in the official WebRTC code again to find the app that I had tested in the Java version. This is the repo at https://github.com/kenneththorman/appspotdemo-mono. So basically, having two repositories is proof of me not being familiar with the official WebRTC code base and not really knowing which code is which.


Any help or suggestions are welcome, please feel free to fork, create issues or communicate here and I will do my best to answer.




Thursday, January 09, 2014

rfc5766-turn-server one liner install on CentOS

cd /usr/src;wget https://rfc5766-turn-server.googlecode.com/files/turnserver-3.2.1.4.tar.gz; tar -xf turnserver-3.2.1.4.tar.gz; cd turnserver-3.2.1.4; ./configure; make; make test; service turnserver stop; make install; service turnserver start

Thursday, January 02, 2014

Question: Mono.Android performance: C# -> JNI wrapper -> native lib vs. C# -> Managed wrapper -> native?

I have ported a Java app to Mono.Android. The Java app uses a native library.

The full walk through can be read here: WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository


Basically I have some Java code that looks like this

public native int GetVideoEngine();
Initially in my C# equivalent app I tried to use DllImport
[DllImport("libwebrtc-video-demo-jni.so")]
public static extern int Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetVideoEngine();
(which did not work), then I ended up wrapping the jars as Java Binding Libraries, which in turn JNI'ed the needed classes, so I could instead call the generated JNI wrappers:
static Delegate cb_GetVideoEngine;
#pragma warning disable 0169
static Delegate GetGetVideoEngineHandler ()
{
    if (cb_GetVideoEngine == null)
        cb_GetVideoEngine = JNINativeWrapper.CreateDelegate ((Func<IntPtr, IntPtr, int>) n_GetVideoEngine);
    return cb_GetVideoEngine;
}

static int n_GetVideoEngine (IntPtr jnienv, IntPtr native__this)
{
    global::Org.Webrtc.Videoengineapp.ViEAndroidJavaAPI __this = global::Java.Lang.Object.GetObject<global::Org.Webrtc.Videoengineapp.ViEAndroidJavaAPI> (jnienv, native__this, JniHandleOwnership.DoNotTransfer);
    return __this.VideoEngine;
}
#pragma warning restore 0169

static IntPtr id_GetVideoEngine;
public virtual int VideoEngine {
    // Metadata.xml XPath method reference: path="/api/package[@name='org.webrtc.videoengineapp']/class[@name='ViEAndroidJavaAPI']/method[@name='GetVideoEngine' and count(parameter)=0]"
    [Register ("GetVideoEngine", "()I", "GetGetVideoEngineHandler")]
    get {
        if (id_GetVideoEngine == IntPtr.Zero)
            id_GetVideoEngine = JNIEnv.GetMethodID (class_ref, "GetVideoEngine", "()I");

        if (GetType () == ThresholdType)
            return JNIEnv.CallIntMethod  (Handle, id_GetVideoEngine);
        else
            return JNIEnv.CallNonvirtualIntMethod  (Handle, ThresholdClass, id_GetVideoEngine);
    }
}

My question is related to performance:

How much slower is C# -> Java Binding Library (Java/JNI) -> C/C++ JNI wrapper -> C native library compared to unwrapping the C/C++ JNI wrapper and rewrapping it with something like mono/cxxi or a similar direct managed binding, or manually writing a directly callable wrapper?