
Chrome Extension (part 1): Injecting External Scripts into a Web Page

Recently, I was asked to develop a Chrome extension for some executives. The extension only targeted a few of our most visited websites, and there were many possible approaches to implementing it and covering all our use cases.
One particular challenge was injecting scripts from our own CDN into specific web pages. The scripts needed to interact with the page and as many of its variables as possible. Since Chrome extensions run content scripts in an isolated world, separate from the page's own JavaScript context, I felt this was a good challenge to take up in my spare time.

Here is a sample of my content script.

content.js
document.onreadystatechange = function () {
  console.log("document state is: " + document.readyState);
  if (document.readyState === "complete") {
    // Random query string to bust the browser cache, so the
    // latest version of the script is always fetched from the CDN.
    let randAntiCache = Math.random().toString().split('.')[1];

    // Build the script URL and append a <script> tag to the page body.
    // A script element added to the DOM this way is fetched and executed
    // in the page's own context, not in the content script's isolated world.
    let scriptURL4 = "http://localhost.test/testscript.js?v=" + randAntiCache;
    let scriptnode4 = document.createElement("script");
    scriptnode4.setAttribute("src", scriptURL4);
    document.body.appendChild(scriptnode4);
  }
};


Even though I had a background script to remove the response headers, I still got the error below.
After reading many articles, I concluded that this could not be achieved without launching Chrome from the command line. The good thing about starting Chrome this way is that it lets you tell Chrome to disable certain security features at startup. So I employed the flags below.

 --allow-running-insecure-content --disable-web-security --disable-gpu --user-data-dir=%LOCALAPPDATA%\Chromium\Temp

To make these flags a semi-permanent fix, I created a Chrome shortcut on my desktop, right-clicked the shortcut, clicked 'Properties', and pasted the long line below into the Target field.
C:\Users\XXX\AppData\Local\Chromium\Application\chrome.exe --allow-running-insecure-content --disable-web-security --disable-gpu --user-data-dir=%LOCALAPPDATA%\Chromium\Temp

The path to the Chrome application is
C:\Users\XXX\AppData\Local\Chromium\Application\chrome.exe
This path may differ on your machine, but it is filled in automatically when you create a shortcut. You can also obtain it by viewing the Properties of any Chrome shortcut created during installation.

I double-clicked the newly modified shortcut to start Chrome, then reloaded the extension and the web page. As you can see below, we now get a warning instead of the error observed moments ago: the script is loaded as expected and executes within the page.


There are still some other issues that will eventually be solved. For example, the recommended architecture dictates that the extension's content script should inject only one script. I call this the father-script (composer.js). The father-script is responsible for detecting the page URL and then appending other scripts (e.g. theme.js) to the page's body tag, or otherwise injecting more scripts into the context of the page. A sketch of this idea follows.
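Here is a minimal sketch of what such a father-script could look like. The composer.js and theme.js names come from the architecture above; the host names and CDN URLs are placeholders for illustration only.

composer.js (sketch)
// Map page hostnames to the extra script each page should receive.
// These mappings are hypothetical; substitute your real CDN paths.
const SCRIPTS_BY_HOST = {
  "intranet.example.com": "https://cdn.example.com/theme.js",
  "reports.example.com": "https://cdn.example.com/reports.js"
};

const scriptURL = SCRIPTS_BY_HOST[window.location.hostname];
if (scriptURL) {
  // Same anti-cache trick as in content.js.
  const antiCache = Math.random().toString().split('.')[1];
  const node = document.createElement("script");
  node.setAttribute("src", scriptURL + "?v=" + antiCache);
  // This appendChild is exactly what a strict Content-Security-Policy blocks.
  document.body.appendChild(node);
}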

Whilst it was easy to inject the father-script (composer.js) into the web page using the extension, the father-script could not inject the other scripts (theme.js) into the page. It only succeeded when the web page had loose Content-Security-Policy headers. Below is a typical scenario on a page where the Content-Security-Policy headers were properly defined:

* composer.js was injected successfully by the extension, but composer.js could not inject theme.js


This experimentation shows that it is quite easy to inject scripts into a web page as an extension developer, but there is enough security in place to ensure that injected scripts cannot inject further scripts.
As you may observe, the error obtained is not about mixed content but a Content-Security-Policy violation. If this host had never declared a Content-Security-Policy, or had defined its script sources as a wildcard, the second-order injection would have worked.
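For illustration, a response header along these lines (the host names here are hypothetical) would block composer.js from appending a script hosted on our CDN, because the CDN origin is not in the allowed source list:

Content-Security-Policy: script-src 'self' https://apis.google.com

A wildcard source such as script-src *, or the absence of the header altogether, would have allowed the second-order injection to go through.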


There are suggestions that the extension itself should inject the other scripts, just as the first one was loaded; it all comes down to flexibility. Currently, I am exploring the onHeadersReceived event of the chrome.webRequest API for this. The official docs say you can only modify Content-Security-Policy headers if you specify 'extraHeaders' in the listener options, and they warn that this is a resource-intensive approach. For me, though, it is worth trying just for experimentation. A rough sketch of the idea is below.
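Here is a rough Manifest V2 sketch of that approach: a background script that strips the Content-Security-Policy header from matching responses before the page sees it. The target URL is a placeholder, and the manifest must declare the webRequest, webRequestBlocking, and matching host permissions. Bear in mind this weakens the security of every matched page, so it is strictly experimental.

background.js (sketch)
chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    // Drop the CSP header so second-order injection is no longer blocked.
    const headers = details.responseHeaders.filter(function (h) {
      return h.name.toLowerCase() !== "content-security-policy";
    });
    return { responseHeaders: headers };
  },
  { urls: ["https://intranet.example.com/*"], types: ["main_frame"] }, // placeholder match pattern
  ["blocking", "responseHeaders", "extraHeaders"]
);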

Let me know in the comments how you handle these situations in your own development workflow.
Feel free to ask for more clarity in the comments; I might provide a GitHub link if required.
