Saturday, November 14, 2020

Why does the medical community in general not seem more interested in early detection?

Background

For a number of years I have been investigating how to use IT/IoT/mobile devices to reduce the time that passes from when a frail, elderly, or chronically ill person becomes ill until the appropriate reaction/treatment can be started.
Historically, this has almost always been a reactive process: the person in question needed to become objectively, visibly ill before someone would or could react. The history of medical diagnosis is a long and interesting one, and diagnostic methods have moved from rather crude to increasingly sophisticated over time. But even with these great improvements, it is still a reactive process.


"Things that take time"

  • The patient's condition starts exhibiting itself or worsens
  • The patient's condition has to be perceived as "bad enough" to cross the patient's "do I call the medical services"-threshold
  • Physical transport time to the health care professional
  • Potential waiting times at the health care professional (waiting time at the emergency room, or the opening hours of a private physician)
  • The health care professional may not have prior knowledge of the patient's condition and may misdiagnose it
  • Tests are done
  • Treatments are started

Depending on many factors, such as:

  • the patient's condition
  • the patient's "do I call the medical services"-threshold
  • the patient's physical location in relation to the relevant health care provider

the time from when the condition starts exhibiting itself or worsens until the appropriate treatment has begun can range from a few hours (at the very best, in non-life-threatening conditions) to several days.

This is not an optimal use of important time, and it can cause the patient's health to deteriorate to a greater or lesser degree.

An example: COPD
( https://en.wikipedia.org/wiki/Chronic_obstructive_pulmonary_disease )


Since 2013 we have been working with a list of objective rules:

  • some rules use fixed ranges for a given value; if you are outside this range, that triggers a yellow or a red color (as opposed to a green color, where things are as good as they can be given the conditions)
  • some rules use percentage deviation from a baseline
These rules have been set fairly conservatively, and remote monitoring then gives us frequent data from the patient. This rule algorithm has more or less eliminated most of the "things that take time".
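As an illustration, the two rule types can be sketched as follows. This is a minimal sketch with made-up, non-clinical thresholds and names of my own choosing; the actual rules and values used in the project are not shown here.

```java
// Hypothetical sketch of the two rule types described above.
// All thresholds are illustrative only, NOT clinical values.
public class VitalsRules {
    public enum Level { GREEN, YELLOW, RED }

    // Fixed-range rule: a value outside the yellow range triggers YELLOW,
    // and a value outside the red range triggers RED.
    public static Level fixedRange(double value, double redLow, double yellowLow,
                                   double yellowHigh, double redHigh) {
        if (value < redLow || value > redHigh) return Level.RED;
        if (value < yellowLow || value > yellowHigh) return Level.YELLOW;
        return Level.GREEN;
    }

    // Baseline-deviation rule: percentage deviation from the patient's own baseline.
    public static Level baselineDeviation(double value, double baseline,
                                          double yellowPct, double redPct) {
        double deviation = Math.abs(value - baseline) / baseline * 100.0;
        if (deviation >= redPct) return Level.RED;
        if (deviation >= yellowPct) return Level.YELLOW;
        return Level.GREEN;
    }
}
```

The point of the baseline rule is that it adapts to each patient: the same absolute value can be green for one patient and red for another.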

It is still a reactive process in the sense that we must wait until the exacerbation manifests itself in a measurable way, for example as colored sputum (spit/mucus) or declining lung function. But these signs are in many cases measurable days, if not up to a week, before the patient's condition worsens to the degree that they are admitted to hospital. This early detection often makes the medical corrections needed significantly smaller.

If this is paired with a nurse specialising in COPD, and a medical doctor specialising in COPD as backup, who monitor the patients' values and receive alarms when the situation starts to decline, and if emergency medication is located with the patients, then in the majority of cases you can preempt patients being admitted to hospital.


A question 


People in general are really good at using step counters, Apple Watches, and other health-related devices for exercise and other purposes. Why has the medical community, and hospitals in particular, not pushed for greater adoption of remote patient monitoring?

Wednesday, February 26, 2014

WebRTC - Current state of affairs - Native App vs HTML/Web - iOS / Android / MacOS / Windows

I received an email inquiring about how to approach WebRTC development in an app; these are my experiences based on some of the functionality that we have implemented.

Depending on your requirements, the easiest way is to go web-only, but then you are currently ruling out iOS support.

In Google Chrome it works on Windows, Linux, MacOS and Android. We have tested cross-browser on Windows, and it works between Chrome, Firefox and Opera.

It is important to note that this is the full Chrome app/application, not the development component "webview". I have not been able to find a webview that supports WebRTC, and the new one in KitKat (Android 4.4) does not either (even though it is built on Chrome rather than the original native Android browser).

Chrome for Android supports a few features which aren't enabled in the WebView, including:
  • WebGL 3D canvas
  • WebRTC
  • WebAudio
  • Fullscreen API
  • Form validation
(https://developers.google.com/chrome/mobile/docs/webview/overview)

So it is possible to create a web-based WebRTC application that will work across Android, Windows, Linux and MacOS and at least on Windows it works across Chrome, Firefox and Opera.


IE is currently a no-go, although some people report that IE supports WebRTC when using http://www.google.com/chromeframe
(http://stackoverflow.com/questions/15724913/which-version-of-microsoft-internet-explorer-support-webrtc)

It works pretty well across network combinations like ADSL/ADSL, 3G/ADSL and 3G/3G, but occasionally you run into a paranoid system admin who has locked the firewall down so tight that neither incoming nor outgoing connections work, and then you will face problems. This can also occur at a higher level (i.e. the ISP). ICE, STUN and TURN can only do so much.



What about native app development and support?

My thoughts on how to do this can be found in the following blog posting: http://kenneththorman.blogspot.dk/2014/01/goal-cross-platform-android-ios-webrtc.html

We are working on both an Android and an iOS solution; it will be developed on GitHub, and everyone is free to contribute. Currently we are facing some problems with the Android version, as can be seen here. The iOS version is technically harder than the Android one, because Google/the WebRTC team have already done most of the hard work for Android in their Java demo app.

So all in all, web is the most doable way at the moment, but there are people who have this working on iOS using Objective-C and on Android using native Java.
http://ninjanetic.com/how-to-get-started-with-webrtc-and-ios-without-wasting-10-hours-of-your-life/

http://kenneththorman.blogspot.dk/2014/01/webrtc-app-c-xamarin-part-1-building.html

Wanting a cross-device code base and thus creating it in C#/MVVMCross makes it harder, since you have to take mono/native (Dalvik/iOS) communication into account, and you may run into restrictions in the Xamarin-supported API for low-level native APIs.


Not many people on the internet seem to have the skills to pull this off, especially if you are aiming for both web and native.

The skillset to pull this off involves one or more of the following, depending on the signaling system you end up using and your functionality requirements:


All in all, the complexity still shines through the WebRTC proposal, not because it is bad, but because it targets such a diverse runtime environment and is such an amazingly complex topic. On top of this, the user's perception is "this is just like a phone call, it should just work 100% of the time". They do not see or understand that there might be a difference calling from the same device when you move from a WiFi to a 3G network. This is plumbing; it should just work, always, on all devices, everywhere. Ouch! That is a pretty high expectation to fulfill.


This is still bleeding edge; I am not even sure it is out of the alpha/beta stages yet, at least officially, so there are reference implementation (Chromium, i.e. Google Chrome) bugs that might bite you.


This is a super exciting area, but at the moment web seems to me the most approachable.


Another option is to buy one of the cloud-based offerings that have both Java and iOS components that make this easier. They shield you from some of this complexity, but that comes at a cost.

Neither web nor native is without pitfalls, but both are certainly doable if you have the skills.

Wednesday, January 22, 2014

Xamarin / Mono: Android adb tracing / debug logging

adb shell setprop debug.checkjni 1
adb shell setprop debug.mono.env MONO_LOG_LEVEL=info
adb shell setprop debug.mono.log gref,gc


And if none of the above helps:

adb shell setprop debug.mono.trace N:Your.App.Namespace

Undoing

adb shell setprop debug.checkjni 0
adb shell setprop debug.mono.env ''
adb shell setprop debug.mono.log ''
adb shell setprop debug.mono.trace ''

Monday, January 20, 2014

Goal: Cross platform (Android / iOS) WebRTC app using MVVMCross

I am attempting to port 2 Java apps that are part of the WebRTC code to Mono/C#. 
The code is available on GitHub; it is still a fairly early version, but it builds (there are some problems at run-time).

Project goals
  1. make an opensource viable implementation of WebRTC for use in Mono / Xamarin / .NET on Android and iOS
  2. make an implementation of a WebRTC UIView/Activity that can be re-used across Android and iOS with as little as possible modifications
  3. be able to share as much code as possible between the different platforms by utilizing PCL (portable class libraries) and MVVMCross
  4. all possible UI logic moved to a shared portable class library using MVVMCross ViewModels
  5. move the WebRTC signaling behind an interface located in a portable class library, with a reference implementation, allowing people to reuse signaling code across platforms while still being able to implement their own.
  6. in time, write a new native interface, similar to the JNI layer for Java but without JNI, directly targeting invocation via P/Invoke / Mono / .NET, allowing C#/Mono to interact directly with the C/C++ WebRTC library (see related question at Mono-JNI-Performance)
At least one of these goals (#6) is outside my comfort zone, and all of them are currently pending; I believe #1 is well in progress.
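To make goal #5 a little more concrete, here is a hedged sketch of the shape such a signaling contract could take. The names (Channel, Loopback) and the Java language are my illustration only; the project itself targets a C# interface in a portable class library, and the real interface would also need connection/room handling.

```java
// Hypothetical sketch of goal #5: signaling hidden behind a small interface.
// Names are illustrative; the actual project targets a C# interface in a PCL.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class Signaling {
    // Minimal contract a platform-specific signaling backend would implement.
    public interface Channel {
        void send(String message);                // SDP offers/answers, ICE candidates, etc.
        void onMessage(Consumer<String> handler); // register the receive callback
    }

    // In-memory loopback implementation, handy for testing without a server.
    public static class Loopback implements Channel {
        private final List<Consumer<String>> handlers = new ArrayList<>();
        public void send(String message) { handlers.forEach(h -> h.accept(message)); }
        public void onMessage(Consumer<String> handler) { handlers.add(handler); }
    }
}
```

The value of the interface is exactly the loopback trick: the UI and call-setup logic can be exercised against an in-memory channel without any server, while each platform plugs in its own real transport.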

Currently I am still in the process of getting the actual WebRTC meeting working. I believe I have a problem in the converted signaling code, which I am trying to track down.



GitHub repositories
appspotdemo-mono (this is the repository that I am focusing my immediate attention on - but I have been short on time lately)

Background for the webrtc-app-mono repository:
When I built the native .so (http://kenneththorman.blogspot.dk/2014/01/webrtc-app-c-xamarin-part-1-building.html) it also built a full Android apk that you can install and test. That was the app I had already tried on my devices, which worked the same way as apprtc.appspot.com, and that was the code I was actually looking for. As embarrassing as it is to admit, the repo https://github.com/kenneththorman/webrtc-app-mono was me porting the "wrong" Java app code. When I found out, and figured out how to work with JNI (mainly through using the Java Binding Library), I went looking in the official WebRTC code again to find the app that I had tested in the Java version. This is the repo at https://github.com/kenneththorman/appspotdemo-mono. So basically, having 2 repositories is proof of me not being familiar with the official WebRTC code base and not really knowing which code is which.


Any help or suggestions are welcome, please feel free to fork, create issues or communicate here and I will do my best to answer.



References

Thursday, January 09, 2014

rfc5766-turn-server one liner install on CentOS

cd /usr/src;wget https://rfc5766-turn-server.googlecode.com/files/turnserver-3.2.1.4.tar.gz; tar -xf turnserver-3.2.1.4.tar.gz; cd turnserver-3.2.1.4; ./configure; make; make test; service turnserver stop; make install; service turnserver start

Thursday, January 02, 2014

Question: Mono.Android performance: C# -> JNI wrapper -> native lib vs. C# -> Managed wrapper -> native?

I have ported a Java app to Mono.Android. The Java app uses a native library.

The full walk through can be read here: WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository


Basically I have some Java code that looks like this

public native int GetVideoEngine();
Initially in my C# equivalent app I tried to use DllImport
[DllImport("libwebrtc-video-demo-jni.so")]
public static extern int Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetVideoEngine();
(which did not work). Then I ended up wrapping the jars as Java Binding Libraries, which in turn JNI'ed the needed classes, so I could instead call the generated JNI wrappers:
static Delegate cb_GetVideoEngine;
#pragma warning disable 0169
static Delegate GetGetVideoEngineHandler ()
{
    if (cb_GetVideoEngine == null)
        cb_GetVideoEngine = JNINativeWrapper.CreateDelegate ((Func<IntPtr, IntPtr, int>) n_GetVideoEngine);
    return cb_GetVideoEngine;
}

static int n_GetVideoEngine (IntPtr jnienv, IntPtr native__this)
{
    global::Org.Webrtc.Videoengineapp.ViEAndroidJavaAPI __this = global::Java.Lang.Object.GetObject<global::Org.Webrtc.Videoengineapp.ViEAndroidJavaAPI> (jnienv, native__this, JniHandleOwnership.DoNotTransfer);
    return __this.VideoEngine;
}
#pragma warning restore 0169

static IntPtr id_GetVideoEngine;
public virtual int VideoEngine {
    // Metadata.xml XPath method reference: path="/api/package[@name='org.webrtc.videoengineapp']/class[@name='ViEAndroidJavaAPI']/method[@name='GetVideoEngine' and count(parameter)=0]"
    [Register ("GetVideoEngine", "()I", "GetGetVideoEngineHandler")]
    get {
        if (id_GetVideoEngine == IntPtr.Zero)
            id_GetVideoEngine = JNIEnv.GetMethodID (class_ref, "GetVideoEngine", "()I");

        if (GetType () == ThresholdType)
            return JNIEnv.CallIntMethod  (Handle, id_GetVideoEngine);
        else
            return JNIEnv.CallNonvirtualIntMethod  (Handle, ThresholdClass, id_GetVideoEngine);
    }
}

My question is related to performance:

How much slower is C# -> Java Binding Library (Java/JNI) -> C/C++ JNI wrapper -> C native library than if I unwrapped the C/C++ JNI wrapper and re-wrapped it with something like mono/cxxi (or a similar direct managed binding), or manually wrote a direct callable wrapper?

Wednesday, January 01, 2014

WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository

This posting concludes a series of 4 blog postings, this being the 4th.

  1. WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library
  2. WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono
  3. WebRTC app - C# / Xamarin - Part #2 - Attempt #2 - success using a JNI .so file from C# / Mono
  4. WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository
And finally the associated GitHub repository https://github.com/kenneththorman/webrtc-app-mono

WebRTC app - C# / Xamarin - Part #2 - Attempt #2 - success using a JNI .so file from C# / Mono

This is a post in a series of postings

  1. WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library
  2. WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono
  3. WebRTC app - C# / Xamarin - Part #2 - Attempt #2 - success using a JNI .so file from C# / Mono
  4. WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository
And finally the associated GitHub repository https://github.com/kenneththorman/webrtc-app-mono

In my previous posting, WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono, I wrote:
This is pushing me in a direction that I initially hoped I could avoid (mainly due to my limited knowledge in the area): JNI.
There were a few reasons that I preferred not to use a Java Binding Library in the solution.
  • I would have an even more mixed source code base (C# calling jar/Java which is wrapping C/C++); I would have preferred to keep it at C# wrapping C/C++.
  • The upstream build process packages some of the compiled Java classes that I need into jar files, but not all of them, so I now need to manually add a step to the build process.
  • I am not familiar with JNI

If we look at the files that are generated during the build process explained in the posting WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library, we will see that the jars we need to build the test Android app supplied with the project are located in

~/WebRTCDemo/trunk/webrtc/video_engine/test/android/libs/

namely

audio_device_module_java.jar
video_capture_module_java.jar
video_render_module_java.jar

For each of these jars we need to create a Java Bindings Library that uses the relevant jar as an input jar. (Actually, video_capture_module_java.jar and video_render_module_java.jar can go into the same Java Binding Library, since they share the same Java package/namespace.)


(It might be possible to add them all to one Java Binding Library, but I encountered problems due to Visual Studio's Default Namespace project setting, which was affecting the namespace of the Java classes that I needed to invoke from C#. Since each of the jars contained classes from a different Java package, the easy workaround was to add a Java Binding Library for each jar and make sure that the Visual Studio default namespace matched the Java package name.)

Looking at the existing Java WebRTC demo app available in the folder

~/WebRTCDemo/trunk/webrtc/video_engine/test/android/src/org/webrtc/videoengineapp

you will find

IViEAndroidCallback.java
ViEAndroidJavaAPI.java
WebRTCDemo.java

The main file is called WebRTCDemo.java and contains code like the following

...
import org.webrtc.videoengine.ViERenderer;

import java.io.File;
import java.io.IOException;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Enumeration;

public class WebRTCDemo extends TabActivity implements IViEAndroidCallback,
                                                       View.OnClickListener,
                                                       OnItemSelectedListener {
    private ViEAndroidJavaAPI vieAndroidAPI = null;

    // remote renderer
    private SurfaceView remoteSurfaceView = null;

    // local renderer and camera
    private SurfaceView svLocal = null;

    // channel number
    private int channel = -1;
    private int cameraId;
    private int voiceChannel = -1;
...



Looking at the code, I could see there would be some problems, since the main activity references 2 classes that are defined and located side by side with it: IViEAndroidCallback.java and ViEAndroidJavaAPI.java. In other words, these are not available in a jar, and ViEAndroidJavaAPI.java in particular is the wrapper for the JNI native library, so we cannot call it directly from C#, according to Native library integration (posting #2).

At the beginning of this posting I outlined 3 reasons I preferred to avoid using JNI and Java Binding Libraries; one of them was:
  • The upstream build process packages some of the compiled Java classes that I need into jar files, but not all of them, so I now need to manually add a step to the build process.
Anyhow, this was fairly easy to remedy by manually creating a new jar containing the 2 files that were needed, and then adding this new jar to a new Java Binding Library.
 
XXX@ubuntu:~/WebRTCDemo/trunk/webrtc/video_engine/test/android/bin/classes$ jar cvf ViEAndroidJavaAPI.jar org/webrtc/videoengineapp/IViEAndroidCallback.class org/webrtc/videoengineapp/ViEAndroidJavaAPI.class
added manifest
adding: org/webrtc/videoengineapp/IViEAndroidCallback.class(in = 218) (out= 173)(deflated 20%)
adding: org/webrtc/videoengineapp/ViEAndroidJavaAPI.class(in = 2845) (out= 1303)(deflated 54%)

Now I was able to use the native library without any exceptions occurring.

WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono

This is a post in a series of postings

  1. WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library
  2. WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono
  3. WebRTC app - C# / Xamarin - Part #2 - Attempt #2 - success using a JNI .so file from C# / Mono
  4. WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository
And finally the associated GitHub repository https://github.com/kenneththorman/webrtc-app-mono

In my previous posting, WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library, I showed how to build the native library we need in order to build a WebRTC app with Xamarin / Mono.Droid.

In this posting I will take you through my struggles and subsequent failure; the next posting finally shows how to actually use this JNI native library from Mono.Android.

I started a new solution in Visual Studio 2013 and added a new Android Application project. Then, according to Xamarin: Using Native Libraries, I needed to add my .so file at the location
<project>\lib\armeabi-v7a\libwebrtc-video-demo-jni.so.

The next step I tried was to use DllImport statements:
using System;
using System.Runtime.InteropServices;
using Android.Content;
using Android.Util;
using Encoding = System.Text.Encoding;

namespace WebRtc
{
        public class ViEAndroidJavaAPI
        {

...
                // API Native

                [DllImport("libwebrtc-video-demo-jni.so")]
                private static extern bool NativeInit(Context context);

                // Video Engine API
                // Initialization and Termination functions
                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int GetVideoEngine();

                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int Init(bool enableTrace);

                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int Terminate();
...


Trying to run my project yielded some EntryPointNotFoundExceptions in the error log. After a bit of Googling, I found that the method names as seen from Mono are not what you would expect; instead, they contain the full package/class path.

Using the following command on the Ubuntu build machine
~/WebRTCDemo/trunk/webrtc/video_engine/test/android/libs/armeabi-v7a$ arm-linux-androideabi-nm -D libwebrtc-video-demo-jni.so
yielded the following output
00015358 T JNI_OnLoad
00015be4 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_AddRemoteRenderer
00016e3c T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_CreateChannel
00015f14 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_EnableNACK
00015f58 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_EnablePLI
00015e04 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetCameraOrientation
00015acc T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetCodecs
000153d4 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetVideoEngine
000154cc T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_Init
000153d0 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_NativeInit
00015c30 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_RemoveRemoteRenderer
00015fb0 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_SetCallback
00015ea8 T Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_SetExternalMediaCodecDecoderRenderer
...
So changing my code to the following made the EntryPointNotFoundException go away
using System;
using System.Runtime.InteropServices;
using Android.Content;
using Android.Util;
using Encoding = System.Text.Encoding;

namespace WebRtc
{
        public class ViEAndroidJavaAPI
        {

...
                // API Native

                [DllImport("libwebrtc-video-demo-jni.so")]
                private static extern bool Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_NativeInit(Context context);

                // Video Engine API
                // Initialization and Termination functions
                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetVideoEngine();

                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_Init(bool enableTrace);

                [DllImport("libwebrtc-video-demo-jni.so")]
                public static extern int Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_Terminate();

...
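The entry points listed by nm follow the standard JNI short-name convention: "Java_", then the package name with dots replaced by underscores, then the class name and the method name. A small helper sketching that rule (my own illustration, not part of the project; the full JNI specification additionally escapes underscores in names and appends argument signatures for overloaded methods, which is omitted here):

```java
// Sketch of the JNI short-name mangling rule (simplified: no escaping of '_'
// in names and no overload signatures, which the full JNI spec also defines).
public class JniName {
    public static String symbolFor(String packageName, String className, String methodName) {
        return "Java_" + packageName.replace('.', '_') + "_" + className + "_" + methodName;
    }
}
```

For example, symbolFor("org.webrtc.videoengineapp", "ViEAndroidJavaAPI", "GetVideoEngine") yields Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_GetVideoEngine, matching the nm output above.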

Now I was faced with another exception, which seemed much nastier:
UNHANDLED EXCEPTION: System.Runtime.InteropServices.MarshalDirectiveException: Type Java.Lang.Object which is passed to unmanaged code must have a StructLayout attribute.

12-21 19:29:04.298 I/MonoDroid(15226): UNHANDLED EXCEPTION: System.Runtime.InteropServices.MarshalDirectiveException: Type Java.Lang.Object which is passed to unmanaged code must have a StructLayout attribute.
12-21 19:29:04.298 I/MonoDroid(15226): at (wrapper managed-to-native) WebRtc.ViEAndroidJavaAPI.Java_org_webrtc_videoengineapp_ViEAndroidJavaAPI_NativeInit (Android.Content.Context) 
12-21 19:29:04.298 I/MonoDroid(15226): at WebRtc.ViEAndroidJavaAPI..ctor (Android.Content.Context) [0x00033] in XXX\WebRtc.Mono.Droid\ViEAndroidJavaAPI.cs:30
12-21 19:29:04.298 I/MonoDroid(15226): at WebRtc.Mono.Droid.WebRTCDemo.startMain () [0x0004b] in XXX\WebRtc.Mono.Droid\WebRTCDemo.cs:533
12-21 19:29:04.298 I/MonoDroid(15226): at WebRtc.Mono.Droid.WebRTCDemo.OnCreate (Android.OS.Bundle) [0x0028e] in XXX\WebRtc.Mono.Droid\WebRTCDemo.cs:313
12-21 19:29:04.298 I/MonoDroid(15226): at Android.App.Activity.n_OnCreate_Landroid_os_Bundle_ (intptr,intptr,intptr) [0x00011] in /Users/builder/data/lanes/monodroid-mlion-monodroid-4.10.1-branch/d23a19bf/source/monodroid/src/Mono.Android/platforms/android-17/src/generated/Android.App.Activity.cs:2119
12-21 19:29:04.298 I/MonoDroid(15226): at (wrapper dynamic-method) object.705dc6ba-9c58-4bcd-a8a2-f12584a9175f (intptr,intptr,intptr)

Finally, after digging, I found the posting Native library integration, which basically states:
you cannot sanely use P/Invoke to invoke the native method. You must instead use JNI to invoke the Java-side native method.
Basically, because this is the Java native C/C++ interface we are invoking, you cannot call these methods like normal, non-JNI-wrapped C/C++ methods.


This is pushing me in a direction that I initially hoped I could avoid (mainly due to my limited knowledge in the area), JNI.


Later: I did quite a bit of reading, and found these links useful:

Interop with Native Libraries
Java Integration Overview
Working With JNI

In the next posting in this series I manage to invoke the native methods.

WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library

This is a post in a series of postings

  1. WebRTC app - C# / Xamarin - Part #1 - Building platform native webrtc library
  2. WebRTC app - C# / Xamarin - Part #2 - Attempt #1 - failure using a JNI .so file directly from C# / Mono
  3. WebRTC app - C# / Xamarin - Part #2 - Attempt #2 - success using a JNI .so file from C# / Mono
  4. WebRTC app - C# / Xamarin - (C# - JNI - C/C++) - Summary and GitHub repository
And finally the associated GitHub repository https://github.com/kenneththorman/webrtc-app-mono

I wish to build a cross-platform app that supports WebRTC (real-time communication - wikipedia article here). I would like to use Xamarin to achieve some level of code reuse between iOS and Android, since several areas of the application can use common code.
This series of blog postings will document my attempt at implementing the initial Android app (Mono.Android); then I will attempt to move this to iOS (monotouch). Unlike some of my other postings, this is a documentation project written during the attempt to reach that goal.

Let's get started.
Building webrtc and all the associated libraries is not for the faint of heart, but luckily there are some pretty nice build tools available that do the job nicely. For both the Android and, later, the iOS edition we will need a native compiled C/C++ library that does all the heavy lifting with regard to audio/video rendering, decoding and encoding. So my first goal was to build an Android-compatible library that I could include in my solution.

Here are some good links that got me started
http://www.webrtc.org/reference/getting-started

and then I found a little gem at Ryazantsev's blog which basically walks you through the process step by step. I already had an Ubuntu 12.04 LTS virtual machine installed and configured, so below are the console commands run on my machine. I use some slightly modified commands compared to Ryazantsev's blog; thanks to Ryazantsev for posting this great walkthrough.


##Installing JAVA (first install default JAVA)
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update && sudo apt-get install oracle-jdk7-installer
chmod a+x jdk-6u45-linux-x64.bin
./jdk-6u45-linux-x64.bin
mkdir /usr/lib/jvm
mv jdk1.6.0_45 /usr/lib/jvm/jdk1.6.0_45

##Update alternatives of java tools
jdir=jdk1.6.0_45
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/$jdir/bin/javac 1
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/$jdir/bin/java 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/lib/jvm/$jdir/bin/javaws 1
sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/$jdir/bin/jar 1

##Check alternatives and java version
sudo update-alternatives --config javac
sudo update-alternatives --config java
sudo update-alternatives --config javaws
sudo update-alternatives --config jar
ls -la /etc/alternatives/{java,javac,javaws,jar}
java -version

echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.6.0_45' >> ~/.bashrc
echo 'PATH="$PATH":`pwd`/depot_tools' >> ~/.bashrc
source ~/.bashrc
printenv | grep depot_tools
 
sudo apt-get install git subversion libpulse-dev g++ pkg-config gtk+-2.0 libnss3-dev
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

###Building WebRTC Trunk
cd ~
mkdir WebRTCDemo
cd WebRTCDemo/

gclient config https://webrtc.googlecode.com/svn/trunk
gclient sync
echo target_os = [\'android\', \'unix\'] >> .gclient
./trunk/build/install-build-deps.sh  --no-chromeos-fonts
./trunk/build/install-build-deps-android.sh
## check 'sudo update-alternatives --config java' for the correct path
gclient sync
cd trunk
source ./build/android/envsetup.sh
gclient runhooks
GYP_GENERATORS=ninja ./build/gyp_chromium --depth=. all.gyp 
ninja -C out/Debug -j10 All

After downloading all the files and building the project, it takes up about 4.6GB on my disk in the virtual machine. Many source files in the project will prove to be a good reference when we need to start using this in a Xamarin project. The really relevant parts, though, are the Java sources for the Java version of the app (written by the WebRTC authors) as well as the Android-compatible .so file.


The compiled .so file we need is available at 
~/WebRTCDemo/trunk/webrtc/video_engine/test/android/libs/armeabi-v7a/libwebrtc-video-demo-jni.so 
The next posting will be attempt #1 at converting a Java app to a C# equivalent.