How To Instantiate Multiple Beans Dynamically in Spring-Boot Depending on Configuration-Properties

TL;DR

In this mini-HowTo I will show a way to instantiate multiple beans dynamically in Spring-Boot, depending on configuration-properties.
We will:

  • write an ApplicationContextInitializer to add the beans to the context before it is refreshed
  • write an EnvironmentPostProcessor to access the configured property sources
  • register the EnvironmentPostProcessor with Spring-Boot

Write an ApplicationContextInitializer

Additional beans can be added programmatically quite easily with the help of an ApplicationContextInitializer:

@AllArgsConstructor
public class MultipleBeansApplicationContextInitializer
    implements
      ApplicationContextInitializer
{
  private final String[] sites;

  @Override
  public void initialize(ConfigurableApplicationContext context)
  {
    ConfigurableListableBeanFactory factory =
        context.getBeanFactory();
    for (String site : sites)
    {
      SiteController controller =
          new SiteController(site, "Description of site " + site);
      factory.registerSingleton("/" + site, controller);
    }
  }
}

This simplified example is configured with a list of strings that should be registered as controllers with the DispatcherServlet.
All “sites” are instances of the same controller SiteController, which are instantiated and registered dynamically.

The instances are registered as beans with the method registerSingleton(String name, Object bean)
of a ConfigurableListableBeanFactory, which can be accessed through the provided ConfigurableApplicationContext.

The array of strings represents the accessed configuration properties in the simplified example.
The array will most probably hold more complex data-structures in a real-world application.
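The tutorial keeps SiteController itself out of the listing. Stripped of any Spring-MVC request-handling (which is omitted here), a minimal hypothetical sketch consistent with the constructor call above might look like this:

```java
// Hypothetical sketch of SiteController: the two constructor arguments
// used in the initializer above (site name and description) stored in a
// POJO. The real controller would additionally carry the request-handling
// that makes it usable with the DispatcherServlet.
public class SiteController
{
  private final String site;
  private final String description;

  public SiteController(String site, String description)
  {
    this.site = site;
    this.description = description;
  }

  public String getSite() { return site; }
  public String getDescription() { return description; }
}
```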

But how do we get access to the configuration-parameters that are injected into this array here…?

Accessing the Configured Property-Sources

Instantiating and registering the additional beans is easy.
The real problem is to access the configuration properties in the early plumbing stage of the application-context in which our ApplicationContextInitializer runs:

The initializer cannot be instantiated and autowired by Spring!

The Bad News: In the early stage we are running in, we cannot use autowiring or access any of the other beans that will be instantiated by Spring – especially not any of the beans instantiated via @ConfigurationProperties, which we are interested in.

The Good News: We will present a way to access initialized instances of all property sources that will be presented to your app.

Write an EnvironmentPostProcessor

If you write an EnvironmentPostProcessor, you will get access to an instance of ConfigurableEnvironment, which contains a complete list of all PropertySources that are configured for your Spring-Boot-App.

public class MultipleBeansEnvironmentPostProcessor
    implements
      EnvironmentPostProcessor
{
  @Override
  public void postProcessEnvironment(
      ConfigurableEnvironment environment,
      SpringApplication application)
  {
    String sites =
        environment.getRequiredProperty("juplo.sites", String.class);

    application.addInitializers(
        new MultipleBeansApplicationContextInitializer(
            Arrays
                .stream(sites.split(","))
                .map(site -> site.trim())
                .toArray(size -> new String[size])));
  }
}

The Bad News:
Unfortunately, you have to scan all property-sources for the parameters that you are interested in.
Also, all values are represented as strings in this early startup-phase of the application-context, because Spring’s convenient conversion mechanisms are not available yet.
So, you have to convert all values yourself and stuff them into more complex data-structures as needed.
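To illustrate the kind of manual conversion meant here, the following plain-Java helper (hypothetical, not part of the example project) turns raw string values into typed data – the same split-and-trim logic as in the EnvironmentPostProcessor above, plus a numeric conversion:

```java
// Hypothetical helper for the manual conversions needed in this early
// startup-phase, where all property values still arrive as strings.
public class SiteConfigParser
{
  // Splits the raw comma-separated value of a property like
  // "juplo.sites" into a trimmed String-array.
  public static String[] parseSites(String raw)
  {
    if (raw == null || raw.trim().isEmpty())
      return new String[0];
    return java.util.Arrays
        .stream(raw.split(","))
        .map(String::trim)
        .filter(site -> !site.isEmpty())
        .toArray(String[]::new);
  }

  // Numeric values have to be converted by hand as well, because
  // Spring's ConversionService is not available yet.
  public static int parsePort(String raw, int fallback)
  {
    try
    {
      return Integer.parseInt(raw.trim());
    }
    catch (NumberFormatException e)
    {
      return fallback;
    }
  }
}
```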

The Good News:
The property names are consistently represented in standard Java-Properties-Notation, regardless of the actual type (.properties / .yml) of the property source.
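For example, the list used above could be configured in either format and would be read under the same key (the values are made up for illustration):

```yaml
# application.yml — read under the key "juplo.sites",
# exactly as if application.properties contained the line:
#   juplo.sites=a,b,c
juplo:
  sites: a,b,c
```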

Register the EnvironmentPostProcessor

Finally, you have to register the EnvironmentPostProcessor with your Spring-Boot-App.
This is done in the META-INF/spring.factories:

org.springframework.boot.env.EnvironmentPostProcessor=\
  de.juplo.demos.multiplebeans.MultipleBeansEnvironmentPostProcessor

That’s it, you’re done!

Source Code

You can find the whole source code in a working mini-application on juplo.de and GitHub.

Other Blog-Posts On The Topic

  • The blog-post Dynamic Beans in Spring shows a way to register beans dynamically, but does not show how to access the configuration. Also, another interface has since been added to Spring that facilitates this approach: BeanDefinitionRegistryPostProcessor
  • Benjamin shows in How To Create Your Own Dynamic Bean Definitions In Spring how this interface can be applied and how one can access the configuration. But his example only works with plain Spring in a Servlet Container.

How To Redirect To Spring Security OAuth2 Behind a Gateway/Proxy — Part 2: Hiding The App Behind A Reverse-Proxy (Aka Gateway)

This post is part of a series of mini-HowTos that offer some help to get you started when switching from localhost to production, with SSL and a reverse-proxy (aka gateway) in front of your app that forwards the requests to an app listening on a different name/IP, port and protocol.

In This Series We…

  1. Run the official Spring-Boot-OAuth2-Tutorial as a container in docker
  2. Simulate production by hiding the app behind a gateway (this part)
  3. Show how to debug the oauth2-flow for the whole crap!
  4. Enable SSL on our gateway
  5. Show how to do the same with Facebook, instead of GitHub

I will also give some advice for those of you who are new to Docker – but just enough to enable you to follow.

This is part 2 of this series, which shows how to run a Spring-Boot OAuth2 app behind a gateway.
Part 1 is linked above.

Our Plan: Simulating A Production-Setup

We will simulate a production-setup by adding the domain that will be used in production – example.com in our case – as an alias for localhost.

Additionally, we will start an NGINX as reverse-proxy alongside our app and put both containers into a virtual network.
This simulates a real-world scenario, where your app will be running behind a gateway together with a bunch of other apps and will have to deal with forwarded requests.

Together, this enables you to test the production-setup of your oauth2-provider against a locally running development environment, including the configuration of the final URIs and nasty forwarding-errors.

To reach this goal we will have to:

  1. Reconfigure our oauth-provider for the new domain
  2. Add the domain as an alias for localhost
  3. Create a virtual network
  4. Move the app into the created virtual network
  5. Configure and start nginx as gateway in the virtual network

By the way:
Any other server that can act as a reverse proxy, or some real gateway like Zuul, would work as well, but we stick with good old NGINX to keep it simple.

Switching The Setup Of Your OAuth2-Provider To Production

In our example we are using GitHub as oauth2-provider and example.com as the domain where the app should be found after the release.
So, we will have to change the Authorization callback URL to
http://example.com/login/oauth2/code/github

O.k., that’s done.

But we have not released yet, and nothing can be found on the real server that hosts example.com.
But still, we really would like to test that production-setup, to be sure that we have configured all bits and pieces correctly!


In order to tackle this chicken-and-egg-problem, we will fool our locally running browser into believing that example.com is our local development system.

Setting Up The Alias for example.com

On Linux/Unix this can be simply done by editing /etc/hosts.
You just have to add the domain (example.com) at the end of the line that starts with 127.0.0.1:

127.0.0.1	localhost example.com

Locally running programs – like your browser – will now resolve example.com to 127.0.0.1.

Create A Virtual Network With Docker

Next, we have to create a virtual network into which we can put both containers:

docker network create juplo

Yes, with Docker it is as simple as that.

Docker networks also come with some extra goodies.
One of them is especially handy for our use-case: they enable automatic name-resolving for the connected containers.
Because of that, we do not need to know the IP-addresses of the participating containers, as long as we give each connected container a name.

Docker vs. Kubernetes vs. Docker-Compose

We are using Docker here on purpose.
Using Kubernetes just to test and experiment on a developer box would be overkill.
Using Docker-Compose might be an option.
But we want to keep it as simple as possible for now, hence we stick with Docker.
Also, we are just experimenting here.


You might want to switch to Docker-Compose later.
Especially, if you plan to set up an environment, that you will frequently reuse for manual tests or such.

Move The App Into The Virtual Network

To move our app into the virtual network, we have to start it again with the additional parameter --network.
We also want to give it a name this time, by using --name, to be able to contact it by name.


You have to stop and remove the old container from part 1 of this HowTo-series with CTRL-C beforehand, if it is still running – removing is done automatically, because we specified --rm:

docker run \
  -d \
  --name app \
  --rm \
  --network juplo \
  juplo/social-logout:0.0.1 \
  --server.use-forward-headers=true \
  --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_ID \
  --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_SECRET

Summary of the changes in comparison to the statement used in part 1:

  • We added -d to run the container in the background – See tips below…
  • We added --server.use-forward-headers=true, which is needed because our app is running behind a gateway now – I will explain this in more detail later

  • And: Do not forget the --network juplo,
    which is necessary to put the app in our virtual network juplo, and --name app, which is necessary to enable DNS-resolving.
  • You do not need the port-mapping this time, because we will only talk to our app through the gateway.
    Remember: We are hiding our app behind the gateway!

Some quick tips to Docker-newbies

  • Since we are starting multiple containers, that shall run in parallel, you have to start each command in a separate terminal, because CTRL-C will stop (and in our case remove) the container again.
  • Alternatively, you can add the parameter -d (for detached) to start the container in the background.

  • Then, you can look at its output with docker logs -f NAME (safely disruptable with CTRL-C) and stop (and in our case remove) the container with docker stop NAME.
  • If you wonder, which containers are actually running, docker ps is your friend.

Starting the Reverse-Proxy Aka Gateway

Next, we will start NGINX alongside our app and configure it as reverse-proxy:

  1. Create a file proxy.conf with the following content:

    upstream upstream_a {
      server        app:8080;
    }
    
    server {
      listen        80;
      server_name   example.com;
    
      proxy_set_header     X-Real-IP           $remote_addr;
      proxy_set_header     X-Forwarded-For     $proxy_add_x_forwarded_for;
      proxy_set_header     X-Forwarded-Proto   $scheme;
      proxy_set_header     Host                $host;
      proxy_set_header     X-Forwarded-Host    $host;
      proxy_set_header     X-Forwarded-Port    $server_port;
    
      location / {
        proxy_pass  http://upstream_a;
      }
    }
    
    • We define a server that listens to requests for the host example.com (server_name) on port 80.
    • With the location-directive we tell this server that all requests shall be handled by the upstream-server upstream_a.
    • This server was defined in the upstream-block at the beginning of the configuration-file to be a forward to app:8080
    • app is simply the name of the container that is running our oauth2-app – Remember: the name is resolvable via DNS
    • 8080 is the port our app listens on in that container.
    • The proxy_set_header-directives are needed by Spring Security to deal correctly with the circumstance that the app is running behind a reverse-proxy.

    In part 3, we will survey the proxy_set_header-directives in more detail.

  2. Start nginx in the virtual network and connect port 80 to localhost:

    docker run \
      --name proxy \
      --rm \
      --network juplo -p 80:80 \
      --volume $(pwd)/proxy.conf:/etc/nginx/conf.d/proxy.conf:ro \
      nginx:1.17
    

    This command has to be executed in the directory where you have created the file proxy.conf.

    • I use NGINX here, because I want to demystify the work of a gateway:
      traefik would have been easier to configure in this setup, but it would have disguised what is going on behind the scenes: with NGINX we have to configure everything manually, which is more explicit and hence more informative
    • We can use port 80 on localhost, since the docker-daemon runs with root-privileges and hence can use this privileged port – provided you do not have another webserver running locally on that port.
    • $(pwd) resolves to your current working-directory – this is the most convenient way to produce the absolute path to proxy.conf, which is required for --volume to work correctly.
  3. If you have reproduced the recipe exactly, your app should be up and running now.
    That is:

    • Because we set the alias example.com to point at localhost, you should now be able to open your app as http://example.com in a locally running browser
    • You then should be able to login/logout without errors
    • If you have configured everything correctly, neither your app nor GitHub should mutter at you during the redirect to GitHub and back to your app

    What’s next… is what can go wrong!

    In this simulated production-setup a lot of stuff can go wrong!
    You may face nearly any problem, from configuration-mismatches concerning the redirect-URIs to nasty and hidden redirect-issues due to forwarded requests.


    Do not mutter at me…
    Remember: That was the reason, we set up this simulated production-setup in the first place!

    In the next part of this series I will explain some of the most common problems in a production-setup with forwarded requests.
    I will also show how you can debug the oauth2-flow in your simulated production-setup, to discover and solve these problems.

How To Redirect To Spring Security OAuth2 Behind a Gateway/Proxy – Part 1: Running Your App In Docker

Switching From Tutorial-Mode (aka POC) To Production Is Hard

Developing your first OAuth2-App on localhost with OAuth2 Boot may be easy, …

…but what about running it in real life?

Looking for the real life

This is the first post of a series of mini-HowTos that offer some help to get you started when switching from localhost to production, with SSL and a reverse-proxy (aka gateway) in front of your app that forwards the requests to an app listening on a different name/IP, port and protocol.

In This Series We Will…

  1. Start with the fantastic official OAuth2-Tutorial from the Spring-Boot folks – love it! – and run it as a container in docker
  2. Hide that behind a reverse-proxy, like in production – nginx in our case, but it could be any piece of software that can act as a gateway
  3. Show how to debug the oauth2-flow for the whole crap!
  4. Enable SSL for our gateway – because oauth2-providers (like Facebook) are pressing us to do so
  5. Show how to do the same with Facebook, instead of GitHub

I will also give some advice for those of you who are new to Docker – but just enough to enable you to follow.

This is Part 1 of this series, which shows how to package a Spring-Boot app as a Docker image and run it as a container.

tut-spring-boot-oauth2/logout

As an example of a simple app that uses OAuth2 for authentication, we will use the third step of the Spring-Boot OAuth2-Tutorial.

You should work through that tutorial up until that step – called logout – if you have not done so yet.
This will guide you through programming and setting up a simple app that uses the GitHub-API to authenticate its users.

Especially, it explains how to create and set up an OAuth2-App on GitHub. Do not miss out on that part: You need your own app-ID and -secret and a correctly configured redirect URI.

You should be able to build the app as JAR and start that with the ID/secret of your GitHub-App without changing code or configuration-files as follows:

mvn package
java -jar target/social-logout-0.0.1-SNAPSHOT.jar \
  --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_APP_ID \
  --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_APP_SECRET

If the app is running correctly, you should be able to Login/Logout via http://localhost:8080/

The folks at Spring-Boot are keeping the guide and this repository up-to-date pretty well.
At the time of writing of this article, it is up to date with version 2.2.2.RELEASE of Spring-Boot.

You may as well use any other OAuth2-application here. For example your own POC, if you have already built one that works while running on localhost.

Some Short Notes On OAuth2

I will only explain the protocol in very few words here, so that you can understand what goes wrong in case you stumble across one of the many pitfalls when setting up oauth2.
You can read more about oauth2 elsewhere.

For authentication, oauth2 redirects the browser of your user to a server of your oauth2-provider.
This server authenticates the user and redirects the browser back to your server, providing additional information and resources that let your server know that the user was authenticated successfully and enable it to request more information in the name of the user.

Hence, when configuring oauth2, one has to:

  1. Provide the URI of the server of your oauth2-provider, to which the browser will be redirected for authentication
  2. Tell the server of the oauth2-provider the URL the browser will be redirected back to after authentication
  3. Also, your app has to provide some identification – a client-ID and -secret that the provider has to know – when redirecting to the server of your oauth2-provider

There are a lot more things that can be configured in oauth2, because the protocol is designed to fit a wide range of use-cases.
But in our case, it usually boils down to the parameters mentioned above.

Considering our combination of spring-security-oauth2 with GitHub this means:

  1. The redirect-URIs of well-known oauth2-providers like GitHub are built into the library and do not have to be configured explicitly.
  2. The URI the provider has to redirect the browser back to after authenticating the user is predefined by the library as well.

    But as an additional security measure, almost every oauth2-provider requires you to also specify this redirect-URI in the configuration on the side of the oauth2-provider.

    This is a good and necessary protection against fraud, but at the same time the primary source of misconfiguration:
    If the URIs specified in the configuration of your app and on the server of your oauth2-provider do not match, ALL WILL FAIL!
  3. The ID and secret of the client (your GitHub-app) always have to be specified explicitly by hand.

Again, everything can be manually overridden, if needed.
Configuration-keys starting with spring.security.oauth2.client.registration.github choose GitHub as the oauth2-provider and trigger a bunch of predefined default-configuration.
If you have set up your own oauth2-provider, you have to configure everything manually.
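For GitHub, the client configuration used throughout this series therefore boils down to the two registration keys (the values are placeholders for your own app's credentials):

```properties
# Choosing "github" as registration-id triggers Spring's predefined
# defaults for GitHub; only the credentials remain to be configured
spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_APP_ID
spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_APP_SECRET
```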

Running The App Inside Docker

To facilitate the debugging – and because this most probably will be the way you are deploying your app anyway – we will start by building a docker-image from the app.

For this, you do not have to change a single character in the example project – all adjustments to the configuration will be done when the image is started as a container.
Just change to the subdirectory logout of the checked-out project and create the following Dockerfile there:

FROM openjdk:8-jre-buster

COPY  target/social-logout-0.0.1-SNAPSHOT.jar /opt/app.jar
EXPOSE 8080
ENTRYPOINT [ "/usr/local/openjdk-8/bin/java", "-jar", "/opt/app.jar" ]
CMD []

This defines a docker-image, that will run the app.

  • The image derives from openjdk:8-jre-buster, which is an installation of the latest OpenJDK-8-JRE on Debian-Buster
  • The app will listen on port 8080
  • By default, a container instantiated from this image will automatically start the Java-app
  • The CMD [] overwrites the default from the parent-image with an empty list – this enables us to pass command-line parameters to our Spring-Boot app, which we will need to pass in our configuration

You can build and tag this image with the following commands:

mvn clean package
docker build -t juplo/social-logout:0.0.1 .

This will tag your image as juplo/social-logout:0.0.1 – you obviously will/should use your own tag here, for example: myfancytag

Do not miss out on the flyspeck (.) at the end of the last line!

You can run this new image with the following command – and you should do that, to test that everything works as expected:

docker run \
  --rm \
  -p 8080:8080 \
  juplo/social-logout:0.0.1 \
  --spring.security.oauth2.client.registration.github.client-id=YOUR_GITHUB_ID \
  --spring.security.oauth2.client.registration.github.client-secret=YOUR_GITHUB_SECRET

  • --rm removes this test-container automatically, once it is stopped again
  • -p 8080:8080 redirects port 8080 on localhost to the app

Everything after the specification of the image (here: juplo/social-logout:0.0.1) is handed as a command-line parameter to the started Spring-Boot app – that is why we needed to declare CMD [] in our Dockerfile.

We utilize this here to pass the ID and secret of your GitHub-app into the docker container — just like when we started the JAR directly.

The app should now behave exactly like in the test above, where we started it directly by calling the JAR.

That means that you should still be able to log into and out of your app, if you browse to http://localhost:8080.
At least, if you correctly configured http://localhost:8080/login/oauth2/code/github as authorization callback URL in the settings of your OAuth App on GitHub.

Coming Next…

In the next part of this series, we will hide the app behind a proxy and simulate that the setup is running on our real server example.com.

Create Self-Signed Multi-Domain (SAN) Certificates

TL;DR

The SAN-extension is removed during signing, if not respecified explicitly.
To create a private CA with self-signed multi-domain certificates for your development setup, you simply have to:

  1. Run create-ca.sh to generate the root-certificate for your private CA.
  2. Run gencert.sh NAME to generate self-signed certificates for the CN NAME with an exemplary SAN-extension.

Subject Alternative Name (SAN) And Self-Signed Certificates

Multi-Domain certificates are implemented as a certificate-extension called Subject Alternative Name (SAN).
One can simply specify the additional domains (or IPs) when creating a certificate.

The following example shows the syntax for the keytool-command that comes with the JDK and is frequently used by Java-programmers to create certificates:

keytool \
 -keystore test.jks -storepass confidential -keypass confidential \
 -genkey -alias test -validity 365 \
 -dname "CN=test,OU=security,O=juplo,L=Juist,ST=Niedersachsen,C=DE" \
 -ext "SAN=DNS:test,DNS:localhost,IP:127.0.0.1"

If you list the content of the newly created keystore with…

keytool -list -v -keystore test.jks

…you should see a section like the following one:

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: test
  DNSName: localhost
  IPAddress: 127.0.0.1
]

The certificate is also valid for these additionally specified domains and IPs.

The problem is that the certificate is not signed and will not be trusted, unless you publicize it explicitly through a truststore.
This is feasible, if you just want to authenticate and encrypt one point-to-point communication.
But if more clients and/or servers have to be authenticated to each other, updating and distributing the truststore will soon become hell.

The common solution in this situation is to create a private CA that can sign newly created certificates.
This way, only the root-certificate of that private CA has to be distributed.
Clients that know the root-certificate of the private CA will automatically trust all certificates that are signed by that CA.

But unfortunately, if you sign your certificate, the SAN-extension vanishes: the signed certificate is only valid for the CN.
(One might think that you just have to export the SAN-extension into the certificate-signing-request – it is not exported by default – but the SAN will still be lost after signing the extended request…)

This removal of the SAN-extension is not a bug, but a feature.
A CA has to be in control of which domains and IPs it signs certificates for.
If a client could write arbitrary additional domains into the SAN-extension of his certificate-signing-request, he could fool the CA into signing a certificate for any domain.
Hence, all entries in a SAN-extension are removed by default during signing.

This default behavior is very annoying, if you just want to run your own private CA to authenticate all your services to each other.

In the following sections, I will walk you through a solution to circumvent this pitfall.
If you just need a working solution for your development setup, you may skip the explanation and just download the scripts that combine the presented steps.

Recipe To Create A Private CA With Self-Signed Multi-Domain Certificates

Create And Distribute The Root-Certificate Of The CA

We are using openssl to create the root-certificate of our private CA:

openssl req \
  -new -x509 -subj "/C=DE/ST=Niedersachsen/L=Juist/O=juplo/OU=security/CN=Root-CA" \
  -keyout ca-key -out ca-cert -days 365 -passout pass:extraconfidential

This should create two files:

  • ca-cert, the root-certificate of your CA
  • ca-key, the private key of your CA with the password extraconfidential

Be sure to protect ca-key and its password, because anyone who has access to both of them can sign certificates in the name of your CA!

To distribute the root-certificate, so that your Java-clients can trust all certificates that are signed by your CA, you have to import the root-certificate into a truststore and make that truststore available to your Java-clients:

keytool \
  -keystore truststore.jks -storepass confidential \
  -import -alias ca-root -file ca-cert -noprompt

Create A Certificate-Signing-Request For Your Certificate

We are reusing the already created certificate here.
If you create a new one, there is no need to specify the SAN-extension, since it will not be exported into the request, and this version of the certificate will be overwritten when the signed certificate is reimported:

keytool \
  -keystore test.jks -storepass confidential \
  -certreq -alias test -file cert-file

This will create the file cert-file, which contains the certificate-signing-request.
This file can be deleted after the certificate is signed (which is done in the next step).

Sign The Request, Adding The Additional Domains In A SAN-Extension

We use openssl x509 to sign the request:

openssl x509 \
  -req -CA ca-cert -CAkey ca-key -in cert-file -out test.pem \
  -days 365 -CAcreateserial -passin pass:extraconfidential \
  -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1")

This can also be done with openssl ca, which has a slightly different and a little more complicated API.
openssl ca is meant to manage a real full-blown CA.
But we do not need the extra options and complexity for our simple private CA.

The important part here is everything that comes after -extensions SAN.
It specifies the Subject-Alternative-Name-section that we want to include additionally in the signed certificate.
Because we are in full control of our private CA, we can specify any domains and/or IPs here that we want.
The other options are ordinary certificate-signing-stuff that is already better explained elsewhere.

We use a special syntax with the option -extfile that allows us to specify the contents of a virtual file as part of the command.
You can as well write your SAN-extension into a file and hand over the name of that file here, as is usually done.
If you want to specify the same SAN-extension in a file, that file would have to contain:

[SAN]
subjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1

Note that the name that you give the extension on the command-line with -extensions SAN has to match the header in the (virtual) file ([SAN]).

As a result of the command, the file test.pem will be created, which contains the signed x509-certificate.
You can display the contents of that certificate in a human-readable form with:

openssl x509 -in test.pem -text

It should display something similar to this example-output.

Import The Root-Certificate Of The CA And The Signed Certificate Into The Keystore

If you want your clients, which only know the root-certificate of your CA, to trust your Java-service, you have to build up a Chain-of-Trust that leads from the known root-certificate to the signed certificate that your service uses to authenticate itself.
(Note: SSL-encryption always includes the authentication of the service a client connects to through its certificate!)
In our case, that chain only has two entries, because our certificate was directly signed by the root-certificate.
Therefore, you have to import the root-certificate (ca-cert) and your signed certificate (test.pem) into a keystore and make that keystore available to the Java-service, in order to enable it to authenticate itself using the signed certificate, when a client connects.

Import the root-certificate of the CA:

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias ca-root -file ca-cert -noprompt

Import the signed certificate (this will overwrite the unsigned version):

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias test -file test.pem

That’s it: we are done!

You can validate the contents of the created keystore with:

keytool \
 -keystore test.jks -storepass confidential \
 -list -v

It should display something similar to this example-output.

To authenticate service A against client B you will have to:

  • make the keystore test.jks available to the service A
  • make the truststore truststore.jks available to the client B

If you want your clients to also authenticate themselves to your services, so that only clients with a trusted certificate can connect (2-Way-Authentication), client B also needs its own signed certificate to authenticate against service A, and service A also needs access to the truststore, to be able to trust that certificate.

Simple Example-Scripts To Create A Private CA And Self-Signed Certificates With SAN-Extension

The following two scripts automate the presented steps and may be useful, when setting up a private CA for Java-development:

  • Run create-ca.sh to create the root-certificate for the CA and import it into a truststore (creates ca-cert and ca-key and the truststore truststore.p12)
  • Run gencert.sh CN to create a certificate for the common name CN, sign it using the private CA (also exemplarily adding alternative names) and build up a valid Chain-of-Trust in a keystore (creates CN.pem and the keystore CN.p12)
  • Global options can be set in the configuration file settings.conf

Read the source for more options…

Differing from the steps shown above, these scripts use the keystore-format PKCS12.
This is because otherwise keytool nags about the non-standard default-format JKS in each and every step.

Note: PKCS12 does not distinguish between a store-password and a key-password. Hence, only a store-password is specified in the scripts.
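For comparison, the first import from above could look like this with PKCS12. This is a sketch, not taken from the scripts: the file-name test.p12 is just a convention, and only the option -storetype actually differs from the JKS-variant.

```shell
keytool \
 -keystore test.p12 -storetype PKCS12 -storepass confidential \
 -import -alias ca-root -file ca-cert -noprompt
```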

Encrypt Communication Between Kafka And ZooKeeper With TLS

TL;DR

  1. Download and unpack zookeeper+tls.tgz.
  2. Run README.sh for a fully automated example of the presented setup.

Copy and paste to execute the two steps on Linux:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv && cd zookeeper+tls && ./README.sh

A German translation of this article can be found on http://trion.de.

Current Kafka Cannot Encrypt ZooKeeper-Communication

Up until now (version 2.3.0 of Apache Kafka) it is not possible to encrypt the communication between the Kafka-Brokers and their ZooKeeper-ensemble.
This is because ZooKeeper 3.4.13, which is shipped with Apache Kafka 2.3.0, lacks support for TLS-encryption.

The documentation deemphasizes this with the observation that usually only non-sensitive data (configuration-data and status information) is stored in ZooKeeper, and that it would not matter if this data were world-readable, as long as it is protected against manipulation, which can be achieved through proper authentication and ACLs for zNodes:

The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster disruption. (Kafka-Documentation)

This quote obfuscates the fact, mentioned elsewhere, that there are use-cases in which sensitive data is stored in ZooKeeper:
for example, if authentication via SASL/SCRAM or Delegation Tokens is used.
Accordingly, the documentation often stresses that there is usually no need to make ZooKeeper accessible to normal clients.
Nowadays, only admin-tools need direct access to the ZooKeeper-ensemble.
Hence, it is stated as a best practice to make the ensemble available only on a local network, hidden behind a firewall or the like.

In cleartext: one must not run a Kafka-Cluster that spans more than one data-center, or at least one has to make sure that all communication is tunneled through a virtual private network.

ZooKeeper 3.5.5 To The Rescue

On May 20th, 2019, version 3.5.5 of ZooKeeper was released.
Version 3.5.5 is the first stable release of the 3.5.x branch and introduces the support for TLS-encryption that the community has yearned for for so long.
It supports the encryption of all communication between the nodes of a ZooKeeper-ensemble and between ZooKeeper-Servers and -Clients.

Part of ZooKeeper is a sophisticated client-API that provides a convenient abstraction for the communication between clients and servers over the Atomic Broadcast Protocol.
The TLS-encryption is applied by this API transparently.
Because of that, all client-implementations can profit from this new feature through a simple library-upgrade from 3.4.13 to 3.5.5.

This article will walk you through an example that shows how to carry out such a library-upgrade for Apache Kafka 2.3.0 and how to configure a cluster to use TLS-encryption when communicating with a standalone ZooKeeper.

Disclaimer

The presented setup is meant for evaluation only!

It fiddles with the libraries used by Kafka, which might cause unforeseen issues.
Furthermore, using TLS-encryption in ZooKeeper requires one to switch from the battle-tested NIOServerCnxnFactory, which uses the NIO-API directly, to the newly introduced NettyServerCnxnFactory, which is built on top of Netty.

Recipe To Enable TLS Between Broker And ZooKeeper

The article will walk you step by step through the setup now.
If you just want to evaluate the example, you can jump to the download-links.

All commands must be executed in the same directory.
We recommend creating a new directory for that purpose.

Download Kafka and ZooKeeper

First of all: Download version 2.3.0 of Apache Kafka and version 3.5.5 of Apache ZooKeeper:

curl -sc - http://ftp.fau.de/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz | tar -xzv
curl -sc - http://ftp.fau.de/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz | tar -xzv

Switch Kafka 2.3.0 from ZooKeeper 3.4.13 to ZooKeeper 3.5.5

Remove the old ZooKeeper-JAR from the libs-directory of Apache Kafka:

rm -v kafka_2.12-2.3.0/libs/zookeeper-3.4.14.jar

Then copy the JARs of the new version of Apache ZooKeeper into that directory. (The last JAR is only needed for CLI-clients such as zookeeper-shell.sh.)

cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-jute-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/netty-all-4.1.29.Final.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/commons-cli-1.2.jar kafka_2.12-2.3.0/libs/

That is all there is to do for the upgrade.
If you run one of the Kafka-commands from now on, it will use ZooKeeper 3.5.5.

Create A Private CA And The Needed Certificates


You can read more about setting up a private CA in this post.

Create the root-certificate for the CA and store it in a Java-truststore:

openssl req -new -x509 -days 365 -keyout ca-key -out ca-cert -subj "/C=DE/ST=NRW/L=MS/O=juplo/OU=kafka/CN=Root-CA" -passout pass:superconfidential
keytool -keystore truststore.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt

The following commands will create a self-signed certificate in zookeeper.jks.
What happens is:

  1. Create a new key-pair and certificate for zookeeper
  2. Generate a certificate-signing-request for that certificate
  3. Sign the request with the key of the private CA and also add a SAN-extension, so that the signed certificate is also valid for localhost
  4. Import the root-certificate of the private CA into the keystore zookeeper.jks
  5. Import the signed certificate for zookeeper into the keystore zookeeper.jks


You can read more about creating self-signed certificates with multiple domains and building a Chain-of-Trust here.

NAME=zookeeper
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -validity 365 -genkey -keypass confidential -dname "CN=$NAME,OU=kafka,O=juplo,L=MS,ST=NRW,C=DE"
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out $NAME.pem -days 365 -CAcreateserial -passin pass:superconfidential -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:$NAME,DNS:localhost")
keytool -keystore $NAME.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
keytool -keystore $NAME.jks -storepass confidential -import -alias $NAME -file $NAME.pem

Repeat this with:

  • NAME=kafka-1
  • NAME=kafka-2
  • NAME=client
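The repetition can be scripted. The following sketch only prints the commands it would run; the SAN- and dname-options from the full commands above are omitted for brevity, so remove the echo-prefixes (and re-add those options) to actually execute it:

```shell
#!/bin/bash
# Dry-run sketch: print the certificate-creation commands for each name.
# The SAN-/dname-details from the full commands above are left out here;
# remove the "echo" prefixes (and re-add those options) to really run it.

gencert() {
  local NAME="$1"
  echo keytool -keystore "$NAME.jks" -storepass confidential -alias "$NAME" -validity 365 -genkey
  echo keytool -keystore "$NAME.jks" -storepass confidential -alias "$NAME" -certreq -file cert-file
  echo openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out "$NAME.pem" -days 365
  echo keytool -keystore "$NAME.jks" -storepass confidential -import -alias ca-root -file ca-cert -noprompt
  echo keytool -keystore "$NAME.jks" -storepass confidential -import -alias "$NAME" -file "$NAME.pem"
}

for NAME in zookeeper kafka-1 kafka-2 client; do
  gencert "$NAME"
done
```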

Now we have signed certificates for all participants in our small example, stored in separate keystores, each with a Chain-of-Trust set up that is rooted in our private CA.
We also have a truststore that will validate all these certificates, because it contains the root-certificate of the Chain-of-Trust: the certificate of our private CA.

Configure And Start ZooKeeper

We highlight/explain only the configuration-options that are needed for TLS-encryption here!

In our setup, the standalone ZooKeeper essentially needs two specially tweaked configuration files to use encryption.

Create the file java.env:

SERVER_JVMFLAGS="-Xms512m -Xmx512m -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory"
ZOO_LOG_DIR=.
  • The Java system-property zookeeper.serverCnxnFactory switches the connection-factory to the Netty-Framework.
    Without this, TLS is not possible!

Create the file zoo.cfg:

dataDir=/tmp/zookeeper
secureClientPort=2182
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=zookeeper.jks
ssl.keyStore.password=confidential
ssl.trustStore.location=truststore.jks
ssl.trustStore.password=confidential
  • secureClientPort: We only allow encrypted connections!
    (If we want to allow unencrypted connections too, we can just specify clientPort additionally.)
  • authProvider.1: Selects authentication through client-certificates
  • ssl.keyStore.*: Specifies the path to and password of the keystore, with the zookeeper-certificate
  • ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA

Copy the file log4j.properties into the current working directory, to enable logging for ZooKeeper (see also java.env):

cp -av apache-zookeeper-3.5.5-bin/conf/log4j.properties .

Start the ZooKeeper-Server:

apache-zookeeper-3.5.5-bin/bin/zkServer.sh --config . start
  • --config .: The script should search the current directory for the configuration-data and certificates.
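Once the server is running, the TLS-port can also be tested directly with the ZooKeeper-CLI. A sketch, assuming the keystore client.jks created above; zkCli.sh picks up additional JVM-options from the environment-variable CLIENT_JVMFLAGS:

```shell
CLIENT_JVMFLAGS="
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
  -Dzookeeper.client.secure=true
  -Dzookeeper.ssl.keyStore.location=client.jks
  -Dzookeeper.ssl.keyStore.password=confidential
  -Dzookeeper.ssl.trustStore.location=truststore.jks
  -Dzookeeper.ssl.trustStore.password=confidential
" apache-zookeeper-3.5.5-bin/bin/zkCli.sh -server localhost:2182
```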

Configure And Start The Brokers


We highlight/explain only the configuration-options and start-parameters here that are needed to encrypt the communication between the Kafka-Brokers and the ZooKeeper-Server!

The other parameters shown here that are concerned with SSL are only needed to secure the communication among the Brokers themselves and between Brokers and Clients.
You can read all about them in the standard documentation.
In short: this example is set up to use SSL for authentication between the brokers and SASL/PLAIN for client-authentication; both channels are encrypted with TLS.

TLS for the ZooKeeper Client-API is configured through Java system-properties.
Hence, most of the SSL-configuration for connecting to ZooKeeper has to be specified when starting the broker.
Only the address and port for the connection itself are specified in the configuration-file.

Create the file kafka-1.properties:

broker.id=1
zookeeper.connect=zookeeper:2182
listeners=SSL://kafka-1:9193,SASL_SSL://kafka-1:9194
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=kafka-1.jks
ssl.keystore.password=confidential
ssl.key.password=confidential
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
sasl.enabled.mechanisms=PLAIN
log.dirs=/tmp/kafka-1-logs
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
  • zookeeper.connect: If you also allow insecure connections, be sure to specify the right port here!
  • All other options are not relevant for encrypting the connections to ZooKeeper

Start the broker in the background and remember its PID in the file KAFKA-1:

(
  export KAFKA_OPTS="
    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    -Dzookeeper.client.secure=true
    -Dzookeeper.ssl.keyStore.location=kafka-1.jks
    -Dzookeeper.ssl.keyStore.password=confidential
    -Dzookeeper.ssl.trustStore.location=truststore.jks
    -Dzookeeper.ssl.trustStore.password=confidential
  "
  kafka_2.12-2.3.0/bin/kafka-server-start.sh kafka-1.properties & echo $! > KAFKA-1
) > kafka-1.log &

Check the logfile kafka-1.log to confirm that the broker starts without errors!

  • zookeeper.clientCnxnSocket: Switches from NIO to the Netty-Framework.
    Without this, the ZooKeeper Client-API (just like the ZooKeeper-Server) cannot use TLS!
  • zookeeper.client.secure=true: Switches on TLS-encryption, for all connections to any ZooKeeper-Server
  • zookeeper.ssl.keyStore.*: Specifies the path to and password of the keystore, with the kafka-1-certificate
  • zookeeper.ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA


Do the same for kafka-2!
And do not forget to adapt the config-file accordingly, or better: just download a copy...
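If you prefer to write it by hand, the following sketch generates a kafka-2.properties along the lines of kafka-1.properties. The listener-ports 9293/9294 are an assumption of this sketch (any free ports will do); everything else follows the pattern shown above:

```shell
# Sketch: derive kafka-2.properties from the kafka-1.properties shown above.
# Assumption: ports 9293/9294 for kafka-2 (any free ports will do).
cat > kafka-2.properties <<'EOF'
broker.id=2
zookeeper.connect=zookeeper:2182
listeners=SSL://kafka-2:9293,SASL_SSL://kafka-2:9294
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=kafka-2.jks
ssl.keystore.password=confidential
ssl.key.password=confidential
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
sasl.enabled.mechanisms=PLAIN
log.dirs=/tmp/kafka-2-logs
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
EOF
```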

Configure And Execute The CLI-Clients

All scripts from the Apache-Kafka-Distribution that connect to ZooKeeper are configured in the same way as seen for kafka-server-start.sh.
For example, to create a topic, you will run:

export KAFKA_OPTS="
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
  -Dzookeeper.client.secure=true
  -Dzookeeper.ssl.keyStore.location=client.jks
  -Dzookeeper.ssl.keyStore.password=confidential
  -Dzookeeper.ssl.trustStore.location=truststore.jks
  -Dzookeeper.ssl.trustStore.password=confidential
"
kafka_2.12-2.3.0/bin/kafka-topics.sh \
  --zookeeper zookeeper:2182 \
  --create --topic test \
  --partitions 1 --replication-factor 2

Note: A different keystore is used here (client.jks)!

CLI-clients that connect to the brokers can be called as usual.

In this example, they use an encrypted listener on port 9194 (for kafka-1) and are authenticated using SASL/PLAIN.
The client-configuration is kept in the files consumer.config and producer.config.
Take a look at those files and compare them with the broker-configuration above.
If you want to learn more about securing broker/client-communication, we refer you to the official documentation.


If you have trouble starting these clients, download the scripts and take a look at the examples in README.sh.
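For orientation: a consumer.config matching the broker-configuration above could look like the following sketch. The actual file shipped with the scripts may differ in detail; the credentials mirror the user_consumer-entry from the broker's JAAS-setting.

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="consumer" password="pw4consumer";
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
```

A console-consumer would then be started with, for example, kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --bootstrap-server kafka-1:9194 --consumer.config consumer.config --topic test --from-beginning.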

TBD: Further Steps To Take...

This recipe only activates TLS-encryption between Kafka-Brokers and a standalone ZooKeeper.
It does not show how to enable TLS between ZooKeeper-Nodes (which should be easy) or whether it is possible to authenticate Kafka-Brokers via TLS-certificates. These topics will be covered in future articles...

Fully Automated Example Of The Presented Setup

Download and unpack zookeeper+tls.tgz for an evaluation of the presented setup:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv

The archive contains a fully automated example.
Just run README.sh in the unpacked directory.

It downloads the required software, carries out the library-upgrade, creates the required certificates and starts a standalone ZooKeeper and two Kafka-Brokers that use TLS to encrypt all communication.
It also executes a console-consumer and a console-producer, which read from and write to a topic, and a zookeeper-shell, which communicates directly with the ZooKeeper-node, to prove that the setup is working.
The ZooKeeper- and Broker-instances are left running, so that the fully encrypted cluster can be evaluated.

Usage

  • Run README.sh to execute the automated example
  • After running README.sh, the Kafka-Cluster will still be running, so that one can experiment by hand with commands from README.sh
  • README.sh can be executed repeatedly: it automatically skips all setup-steps that are already done
  • Run README.sh stop to stop the Kafka-Cluster (it can be restarted by re-running README.sh)
  • Run README.sh cleanup to stop the cluster and remove all created files and data (only the downloaded packages will be left untouched)

Separate Downloads For The Packaged Files

Show Spring-Boot Auto-Configuration-Report When Running Via “mvn spring-boot:run”

There are a lot of explanations of how to turn on the Auto-Configuration-Report offered by Spring-Boot to debug the configuration of one's app.
For a good example, take a look at this little Spring boot troubleshooting auto-configuration guide.
But most often, when I want to see the Auto-Configuration-Report, I am running my app via mvn spring-boot:run.
And, unfortunately, none of the guides you can find via Google tells you how to turn on the Auto-Configuration-Report in this case.
Hence, I hope I can help out with this little tip.

How To Turn On The Auto-Configuration-Report When Running mvn spring-boot:run

The report is shown if the logging for org.springframework.boot.autoconfigure.logging is set to DEBUG.
The simplest way to do that is to add the following line to your src/main/resources/application.properties:

logging.level.org.springframework.boot.autoconfigure.logging=DEBUG

I was not able to enable the logging via a command-line-switch.
The seemingly obvious way, adding the property to the command line with a -D like this:

mvn spring-boot:run -Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG

did not work for me.
If anyone could point out in a comment to this post how to do that, I would be really grateful!
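One approach that may be worth trying (I have not verified it in the context of this article): the spring-boot-maven-plugin passes JVM-arguments to the started application via a plugin-property, not via a plain -D on the Maven-process, so depending on the plugin-version something like this could work:

```shell
# spring-boot-maven-plugin 1.x:
mvn spring-boot:run -Drun.jvmArguments="-Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG"
# spring-boot-maven-plugin 2.x:
mvn spring-boot:run -Dspring-boot.run.jvmArguments="-Dlogging.level.org.springframework.boot.autoconfigure.logging=DEBUG"
```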

Funded by the European Union

This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.


Europäische Union: Investitionen in unsere Zukunft - Europäischer Fonds für regionale Entwicklung
EFRE.NRW 2014-2020: Investitionen in Wachstum und Beschäftigung

Develop a Facebook-App with Spring-Social – Part VII: What is Going On On The Wire

In this series of Mini-How-Tos I will describe how to develop a Facebook-App with the help of Spring-Social.

In the last part of this series, I showed you how you can sign in your users through the signed_request that is sent to your canvas-page.

In this part, I will show you how to turn on logging of the HTTP-requests that your app sends to, and the responses it receives from, the Facebook Graph-API.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-07 to get the source for this part of the series.

Why You Want To Listen On The Wire

While developing your app, you will often wonder why something does not work as expected.
In this case, it is often very useful to be able to debug the communication between your app and the Graph-API.
But since all requests to the Graph-API are secured by SSL, you cannot simply listen in with tcpdump or wireshark.

Fortunately, you can turn on the debugging of the underlying classes that process these requests, to sidestep this problem.

Introducing HttpClient

In its default-configuration, the Spring Framework will use HttpURLConnection, which comes with the JDK, as its HTTP-client.
As described in the documentation, some advanced methods are not available when using HttpURLConnection.
Besides, HttpClient, which is part of Apache's HttpComponents, is a much more mature, powerful and configurable alternative.
For example, you can easily plug in connection-pooling, to speed up the connection-handling, or caching, to reduce the number of requests that go over the wire.
In production, you should always use this implementation instead of the default one that comes with the JDK.

Hence, we will switch our configuration to use the HttpClient from Apache, before turning on the debug-logging.

Switching From The JDK's Default Client To Apache's HttpClient

To switch from the default client that comes with the JDK to Apache's HttpClient, you have to configure an instance of HttpComponentsClientHttpRequestFactory as HttpRequestFactory in your SocialConfig:

@Bean
public HttpComponentsClientHttpRequestFactory requestFactory(Environment env)
{
  HttpComponentsClientHttpRequestFactory factory =
      new HttpComponentsClientHttpRequestFactory();
  factory.setConnectTimeout(
      Integer.parseInt(env.getProperty("httpclient.timeout.connection"))
      );
  factory.setReadTimeout(
      Integer.parseInt(env.getProperty("httpclient.timeout.read"))
      );
  return factory;
}

To use this configuration, you also have to add the dependency org.apache.httpcomponents:httpclient to your pom.xml.

As you can see, this would also be the right place to enable other specialized configuration-options.
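The dependency mentioned above would be declared like this in the pom.xml (the version can be omitted, if it is managed by the Spring-Boot-parent-POM):

```xml
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
</dependency>
```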

Logging The Headers From HTTP-Requests And Responses

I configured a short-cut to enable the logging of the HTTP-headers of the communication between the app and the Graph-API.
Simply run the app with the additional switch -Dhttpclient.logging.level=DEBUG.

Take Full Control

If the headers are not enough to answer your questions, you can enable a lot more debug-messages.
You just have to override the default logging-levels.
Read the original documentation of HttpClient for more details.

For example, to enable logging of the headers and the content of all requests, you have to start your app like this:

mvn spring-boot:run \
    -Dfacebook.app.id=YOUR_ID \
    -Dfacebook.app.secret=YOUR_SECRET \
    -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE \
    -Dlogging.level.org.apache.http=DEBUG \
    -Dlogging.level.org.apache.http.wire=DEBUG

The second logging-switch is necessary, because I defined the default-level ERROR for that logger in our src/main/resources/application.properties, to enable the short-cut for logging only the headers.

Funded by the European Union

This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.


Europäische Union: Investitionen in unsere Zukunft - Europäischer Fonds für regionale Entwicklung
EFRE.NRW 2014-2020: Investitionen in Wachstum und Beschäftigung

Develop a Facebook-App with Spring-Social – Part VI: Sign In Users Through The Canvas-Page

In this series of Mini-How-Tos I will describe how to develop a Facebook-App with the help of Spring-Social.

In the last part of this series, we refactored our authentication-concept, so that it can be replaced by Spring Security later on more easily.

In this part, we will turn our app into a real Facebook-App, that is rendered inside Facebook and signs in users through the signed_request.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-06 to get the source for this part of the series.

What The *#&! Is a signed_request

If you add the platform Facebook Canvas to your app, you can present your app inside of Facebook.
It will then be accessible on a URL like https://apps.facebook.com/YOUR_NAMESPACE, and if a (known!) user accesses this URL, Facebook will send a signed_request that already contains some data of this user and an authorization to retrieve more.

Sign In Users With signed_request In 5 Simple Steps

When I first tried to extend the simple example this article-series is based on, I stumbled across multiple misunderstandings.
But now, as I have guided you around all those obstacles, it is fairly easy to refine our app, so that it can sign in users through the signed_request sent to a Canvas-Page.

You just have to:

  1. Add the platform “Facebook Canvas” in the settings of your app and choose a canvas-URL.
  2. Reconfigure your app to support HTTPS, because Facebook requires the canvas-URL to be secured by SSL.
  3. Configure the CanvasSignInController.
  4. Allow the URL of the canvas-page to be accessed unauthenticated.
  5. Enable Sign-Up through your canvas-page.

That is all, there is to do.
But now, step by step…

Step 1: Turn Your App Into A Canvas-Page

Go to the settings-panel of your app on https://developers.facebook.com/apps and click on Add Platform.
Choose Facebook Canvas.
Pick a secure URL, where your app will serve the canvas-page.

For example: https://localhost:8443.

Be aware that the URL has to be publicly available, if you want to enable other users to access your app.
But the same holds for the Website-URL http://localhost:8080 that we are already using.

Just remember: if other people should be able to access your app later, you have to change these URLs to something they can access, because all the content of your app is served by you, not by Facebook.
A Canvas-App just embeds your content in an iFrame inside of Facebook.

Step 2: Reconfigure Your App To Support HTTPS

Add the following lines to your src/main/resources/application.properties:

server.port: 8443
server.ssl.key-store: keystore
server.ssl.key-store-password: secret

I have included a self-signed keystore with the password secret in the source, which you can use for development and testing.
But of course, later you have to create your own keystore, with a certificate that is signed by an official certificate-authority known to the browsers of your users.
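For reference, a self-signed development-keystore like the included one can be created with keytool; alias, dname and validity in this sketch are assumptions:

```shell
keytool -genkeypair \
  -keystore keystore -storepass secret -keypass secret \
  -alias boot -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=localhost"
```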

Since your app now listens on 8443 and uses HTTPS, you have to change the URL that is used for the platform “Website”, if you want your sign-in-page to continue to work in parallel to the sign-in through the canvas-page.

For now, you can simply change it to https://localhost:8443/ in the settings-panel of your app.

Step 3: Configure the CanvasSignInController

To actually enable the automatic handling of the signed_request (that is: decoding the signed_request and signing in the user with the data it provides), you just have to add the CanvasSignInController as a bean in your SocialConfig:

@Bean
public CanvasSignInController canvasSignInController(
    ConnectionFactoryLocator connectionFactoryLocator,
    UsersConnectionRepository usersConnectionRepository,
    Environment env
    )
{
  return
      new CanvasSignInController(
          connectionFactoryLocator,
          usersConnectionRepository,
          new UserCookieSignInAdapter(),
          env.getProperty("facebook.app.id"),
          env.getProperty("facebook.app.secret"),
          env.getProperty("facebook.app.canvas")
          );
}

Step 4: Allow the URL Of Your Canvas-Page To Be Accessed Unauthenticated

Since we have “secured” all of our pages except our sign-in-page /signin*, so that they can only be accessed by an authenticated user, we have to explicitly allow unauthenticated access to our new special sign-in-page.

To achieve that, we have to refine our UserCookieInterceptor as follows.
First, add a pattern for all pages that may be accessed unauthenticated:

private final static Pattern PATTERN = Pattern.compile("^/signin|canvas");

Then match the requests against this pattern, instead of the fixed string /signin:

if (PATTERN.matcher(request.getServletPath()).find())
  return true;

Step 5: Enable Sign-Up Through Your Canvas-Page

Facebook always sends a signed_request to your app, if a user visits your app through the canvas-page.
But on the first visit of a user, the signed_request does not authenticate the user.
In this case, the only data presented to your page is the language and locale of the user and his or her age.

Because the data that is needed to sign in the user is missing, the CanvasSignInController will issue an explicit authentication-request to the Graph-API via a so-called Server-Side Log-In.
This process includes a redirect to the Login-Dialog of Facebook and then a second redirect back to your app.
It requires the specification of a full absolute URL to redirect back to.

Since we are configuring the canvas-sign-in, we want new users to be redirected to the canvas-page of our app.
Hence, you should use the Facebook-URL of your app: https://apps.facebook.com/YOUR_NAMESPACE.
This will result in a call to your canvas-page with a signed_request that authenticates the new user, if the user accepts to share the requested data with your app.

Any other page of your app would work as well, but the result would be a call to the stand-alone version of your app (the version that Facebook calls the “Website”-platform of your app), meaning that your app would not be rendered inside of Facebook.
It would also require one more call of your app to the Graph-API to actually sign in the new user, because Facebook sends the signed_request only to the canvas-page of your app.

To specify the URL, I have introduced a new attribute facebook.app.canvas that is handed to the CanvasSignInController.
You can specify it when starting your app:

mvn spring-boot:run \
    -Dfacebook.app.id=YOUR_ID \
    -Dfacebook.app.secret=YOUR_SECRET \
    -Dfacebook.app.canvas=https://apps.facebook.com/YOUR_NAMESPACE

Be aware that this process requires the automatic sign-up of new users that we enabled in part 3 of this series.
Otherwise, the user would be redirected to the sign-up-page of your application after he allowed your app to access the requested data.
Obviously, that would be very confusing for the user, so we really need automatic sign-up in this use-case!

Coming Next…

In the next part of this series, I will show you how you can debug the calls that Spring Social makes to the Graph-API, by turning on the debugging of the classes that process the HTTP-requests and -responses your app is making.

Funded by the European Union

This article was published in the course of a research-project that is funded by the European Union and the federal state North Rhine-Westphalia.


Europäische Union: Investitionen in unsere Zukunft - Europäischer Fonds für regionale Entwicklung
EFRE.NRW 2014-2020: Investitionen in Wachstum und Beschäftigung

Develop a Facebook-App with Spring-Social – Part V: Refactor The Redirect-Logic

In this series of Mini-How-Tos I will describe how to develop a Facebook-App with the help of Spring-Social.

In the last part of this series, we reconfigured our app, so that users are signed in after an authentication against Facebook and new users are signed up automatically on their first visit.

In this part, we will refactor our redirect-logic for unauthenticated users, so that it more closely resembles the behavior of Spring Social, easing the planned switch to that technology in a future step.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-05 to get the source for this part of the series.

Mimic Spring Security

To stress that again: our simple authentication-concept is meant for educational purposes only. It is inherently insecure!
We are not refining it here to make it better or more secure.
We are refining it, so that it can be replaced with Spring Security later on without a hassle!

In our current implementation, a user who is not yet authenticated is redirected to our sign-in-page only if he visits the root of our webapp (/).
To move all redirect-logic out of HomeController and redirect unauthenticated users from all pages to our sign-in-page, we can simply modify our interceptor UserCookieInterceptor, which already intercepts each and every request.

We refine the method preHandle, so that it redirects every request that is not authenticated to our sign-in-page:

@Override
public boolean preHandle(
    HttpServletRequest request,
    HttpServletResponse response,
    Object handler
    )
    throws
      Exception
{
  if (request.getServletPath().startsWith("/signin"))
    return true;

  String user = UserCookieGenerator.INSTANCE.readCookieValue(request);
  if (user != null)
  {
    if (!repository
        .findUserIdsConnectedTo("facebook", Collections.singleton(user))
        .isEmpty()
        )
    {
      LOG.info("loading user {} from cookie", user);
      SecurityContext.setCurrentUser(user);
      return true;
    }
    else
    {
      LOG.warn("user {} is not known!", user);
      UserCookieGenerator.INSTANCE.removeCookie(response);
    }
  }

  response.sendRedirect("/signin.html");
  return false;
}

If the user identified by the cookie is not known to Spring Social, we send a redirect to our sign-in-page and flag the request as already handled, by returning false.
To prevent an endless loop of redirects, we must not redirect requests that were already redirected to our sign-in-page.
Since these requests hit our webapp as new requests for the different location, we can filter them out and wave them through at the beginning of the method.

Run It!

That is all there is to do.
Run the app and call the page http://localhost:8080/profile.html as the first request.
You will see that you are redirected to our sign-in-page.

Cleaning Up Behind Us…

As it is now not possible to call any page except the sign-in-page without being redirected to our sign-in-page when not authenticated, it is impossible to call any page without being authenticated.
Hence, we can (and should!) refine our UserIdSource to throw an exception if that happens anyway, because it has to be a sign of a bug:

public class SecurityContextUserIdSource implements UserIdSource
{

  @Override
  public String getUserId()
  {
    Assert.state(SecurityContext.userSignedIn(), "No user signed in!");
    return SecurityContext.getCurrentUser();
  }
}
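The SecurityContext used above is not a Spring class, but a simple holder for the current user. A minimal ThreadLocal-based sketch of the contract that the interceptor and the UserIdSource rely on (the real class in the example project may differ in detail; the clear() method is an addition of this sketch):

```java
// Minimal sketch of the SecurityContext used in this series: a ThreadLocal-based
// holder for the Facebook-ID of the currently signed-in user.
public class SecurityContext
{
  private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

  public static void setCurrentUser(String user)
  {
    CURRENT_USER.set(user);
  }

  public static String getCurrentUser()
  {
    return CURRENT_USER.get();
  }

  public static boolean userSignedIn()
  {
    return CURRENT_USER.get() != null;
  }

  public static void clear()
  {
    // Important in servlet-containers, because threads are pooled and reused
    CURRENT_USER.remove();
  }
}
```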

Coming Next…

In the next part of this series, we will enable users to sign in through the canvas-page of our app.
The canvas-page is the page that Facebook embeds into its webpage, if we render our app inside of Facebook.

Funded by the European Union

This article was published in the course of a
research project
that is funded by the European Union and the federal state North Rhine-Westphalia.


European Union: Investing in our future - European Regional Development Fund
EFRE.NRW 2014-2020: Investments in Growth and Employment

Develop a Facebook-App with Spring-Social – Part IV: Signing In Users

In this series of Mini-HowTos I will describe how to develop a Facebook app with the help of Spring-Social.

In the last part of this series, we tried to teach Spring Social how to remember our signed-in users and learned that we have to sign in a user first.

In this part, I will show you how to sign in (and automatically sign up) users that are authenticated via the Graph-API.

The Source is With You

You can find the source-code on http://juplo.de/git/examples/facebook-app/
and browse it via gitweb.
Check out part-04 to get the source for this part of the series.

In Or Up? Up And In!

In the last part of our series we ran into the problem that we wanted to connect several (new) users to our application.
We tried to achieve that by extending our initial configuration.
But the mistake was that we tried to connect new users.
In the world of Spring Social, we can only connect an already known user to a new social service.

To know a user, Spring Social requires us to sign in that user.
But again, if you try to sign in a new user, Spring Social requires us to sign up that user first.
Because of that, we had already implemented a ConnectionSignUp and configured Spring Social to call it whenever it does not know a user that was authenticated by Facebook.
If you forget that (or if you remove the according configuration that tells Spring Social to use our ConnectionSignUp), Spring Social will redirect you to the URL /signup (a sign-up page you would have to implement) after a successful authentication of a user that it does not know yet.

The confusion — or, to be honest, my confusion — about sign in and sign up arises from the fact, that we are developing a Facebook-Application.
We do not care about signing up users.
Each user, that is known to Facebook — that is, who has signed up to Facebook — should be able to use our application.
An explicit sign-up to our application is not needed and not wanted.
So, in our use-case, we have to implement the automatic sign-up of new users.
But Spring Social is designed for a much wider range of use cases.
Hence, it has to distinguish between sign-in and sign-up.
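For a use-case like ours, such a ConnectionSignUp can be as small as reusing the provider-assigned ID as the local user-ID. A sketch (the class name is made up, and the actual implementation in the example project may differ):

```java
// Hypothetical ConnectionSignUp for automatic sign-ups: it simply reuses the
// Facebook-ID of the authenticated user as local user-ID, so that no explicit
// sign-up page is needed.
public class ProviderUserIdConnectionSignUp implements ConnectionSignUp
{
  @Override
  public String execute(Connection<?> connection)
  {
    // The provider-assigned ID (here: the Facebook-ID) becomes our user-ID
    return connection.getKey().getProviderUserId();
  }
}
```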

Implementation Of The Sign-In

Spring Social provides the interface SignInAdapter, which it calls every time it has authenticated a user against a social service.
This enables us to be aware of that event and to remember the user for subsequent calls.
Our implementation stores the user in our SecurityContext to sign him in and creates a cookie to remember him on subsequent calls:

public class UserCookieSignInAdapter implements SignInAdapter
{
  private final static Logger LOG =
      LoggerFactory.getLogger(UserCookieSignInAdapter.class);


  @Override
  public String signIn(
      String user,
      Connection connection,
      NativeWebRequest request
      )
  {
    LOG.info(
        "signing in user {} (connected via {})",
        user,
        connection.getKey().getProviderId()
        );
    SecurityContext.setCurrentUser(user);
    UserCookieGenerator
        .INSTANCE
        .addCookie(user, request.getNativeResponse(HttpServletResponse.class));

    return null;
  }
}

It returns null to indicate that the user should be redirected to the default-URL after a successful sign-in.
This URL can be configured in the ProviderSignInController and defaults to /, which matches our use-case.
If you return a string here, for example /welcome.html, the controller would ignore the configured URL and redirect to that URL after a successful sign-in.

Configuration Of The Sign-In

To enable the Sign-In, we have to plug our SignInAdapter into the ProviderSignInController:

@Bean
public ProviderSignInController signInController(
    ConnectionFactoryLocator factoryLocator,
    UsersConnectionRepository repository
    )
{
  ProviderSignInController controller = new ProviderSignInController(
      factoryLocator,
      repository,
      new UserCookieSignInAdapter()
      );
  return controller;
}
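If a different post-sign-in URL is wanted, it can also be set on the controller before returning it from the bean method above, using the controller's setPostSignInUrl method:

```java
// Redirect to /welcome.html instead of the default / after a successful sign-in
// (the URL is only an example)
controller.setPostSignInUrl("/welcome.html");
```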

Since we are using Spring Boot, an alternative configuration would have been to just create a bean-instance of our implementation named signInAdapter.
Then, the auto-configuration of Spring Boot would discover that bean, create an instance of ProviderSignInController and plug in our implementation for us.
If you want to learn, how that works, take a look at the implementation of the auto-configuration in the class SocialWebAutoConfiguration, lines 112ff.
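Such an alternative configuration might look like the following sketch; the explicit ProviderSignInController bean from above would then be superfluous, because Spring Boot creates the controller itself:

```java
// Sketch: expose our implementation as a bean, so that Spring Boot's
// auto-configuration picks it up and wires it into the ProviderSignInController
@Bean
public SignInAdapter signInAdapter()
{
  return new UserCookieSignInAdapter();
}
```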

Run it!

If you run our refined example and visit it while impersonating different Facebook users, you will see that everything works as expected now.
If you visit the app for the first time (after a restart) with a new user, the user is signed up and signed in automatically and a cookie is generated that stores the Facebook-ID of the user in the browser.
On subsequent calls, his ID is read from this cookie and the corresponding connection is restored from the persistent store by Spring Social.

Coming Next…

In the next part of this little series, we will move the redirect-if-unknown logic from our HomeController into our UserCookieInterceptor, so that the behavior of our so-called “security”-concept more closely resembles the behavior of Spring Security.
That will ease the migration to that solution in a later step.

Perhaps you want to skip that rather short and boring step and jump to the part after the next, which explains how to sign in users via the signed_request that Facebook sends if you integrate your app as a canvas-page.
