
Chrome Extension (part 1): Injecting External Scripts into a Web Page

Recently, I was asked to develop a Chrome extension for some executives. The extension only targeted a few of our most visited websites, and there were several possible approaches to implementing it and covering all our use cases.
One particular challenge was injecting scripts from our own CDN into specific web pages. These scripts needed to interact with the page and with as many of its variables as possible. Since Chrome extensions run content scripts in an isolated world, separate from the page's own scripts, this felt like a good challenge to take up in my spare time.

Here is a sample of my content script.

content.js
document.onreadystatechange = function () {
  console.log("document state is: " + document.readyState);
  if (document.readyState === "complete") {
    // Random query string to defeat caching of the external script.
    let randAntiCache = Math.random().toString().split(".")[1];

    // Create a <script> tag pointing at the external script and append it
    // to the page body so it executes in the page's own context.
    let scriptURL4 = "http://localhost.test/testscript.js?v=" + randAntiCache;
    let scriptnode4 = document.createElement("script");
    scriptnode4.setAttribute("src", scriptURL4);
    document.body.appendChild(scriptnode4);
  }
};
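
For completeness, here is a minimal sketch of how such a content script might be registered. This assumes Manifest V2; the extension name and match pattern are placeholders, and run_at is set to document_end so that the readystatechange handler above still gets a chance to fire.

manifest.json (minimal sketch)
{
  "manifest_version": 2,
  "name": "CDN Script Injector",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://*.example.test/*"],
      "js": ["content.js"],
      "run_at": "document_end"
    }
  ]
}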


Even though I had a background script to remove the restrictive response headers, I still got the error below.
After reading many articles, I concluded that this could not be achieved without launching Chrome from the command line. The nice thing about starting Chrome this way is that it lets you tell Chrome to disable certain security features at startup. So I used the flags below.

 --allow-running-insecure-content --disable-web-security --disable-gpu --user-data-dir=%LOCALAPPDATA%\Chromium\Temp

To make these flags a semi-permanent fix, I created a Chrome shortcut on my desktop, right-clicked the shortcut, clicked 'Properties', and pasted the long line below into the Target field.
C:\Users\XXX\AppData\Local\Chromium\Application\chrome.exe --allow-running-insecure-content --disable-web-security --disable-gpu --user-data-dir=%LOCALAPPDATA%\Chromium\Temp

The path to the Chrome application is
C:\Users\XXX\AppData\Local\Chromium\Application\chrome.exe
This path may differ on your machine, but it is filled in automatically when you create a shortcut. You can also obtain it by viewing any Chrome shortcut created during installation.

I double-clicked the newly modified shortcut to start Chrome, then reloaded the extension and reloaded the web page. As you can see below, we now get a warning rather than the error observed moments ago; the script is loaded as expected and executed within the page.


There are still some other issues that will eventually need solving. For example, the recommended architecture is for the extension's content script to inject only one script. I call this the father script (composer.js). The father script is responsible for detecting the page URL and then appending the other scripts (e.g. theme.js) to the page's body tag, or otherwise injecting more scripts into the page's context, as sketched below.
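
To make the idea concrete, here is a rough sketch of what such a father script might look like. Everything in it is illustrative: the host names, CDN origin, and file names are placeholders, not the real composer.js.

composer.js (illustrative sketch)
// Hypothetical father script: decide which page-specific scripts to load
// based on the current URL, then append them to the body.
(function () {
  // Placeholder mapping of host fragments to the scripts that page needs.
  var scriptsByHost = {
    "example.com": ["theme.js"],
    "another-site.com": ["theme.js", "widgets.js"]
  };
  var host = window.location.hostname;
  Object.keys(scriptsByHost).forEach(function (key) {
    if (host.indexOf(key) === -1) return;
    scriptsByHost[key].forEach(function (file) {
      var node = document.createElement("script");
      // Placeholder CDN origin; a timestamp query string discourages caching.
      node.src = "https://cdn.example.test/" + file + "?v=" + Date.now();
      document.body.appendChild(node);
    });
  });
})();

On pages with a strict content-security-policy, the appendChild call above is exactly what gets blocked, which is the problem described next.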

While it was easy to inject the father script (composer.js) into the web page using the extension, the father script could not inject the other scripts (theme.js) into the page. It only succeeded when the web page had loose content-security-policy headers. Below is a typical scenario where content-security-policy headers were properly defined:

* composer was injected successfully by the extension, but composer could not inject theme.js


This experimentation shows that it is quite easy to inject scripts into a web page as an extension developer, but there is enough security in place to ensure that injected scripts cannot inject further scripts.
As you might observe, the error obtained here is not about mixed content but a content-security-policy violation. If this host had never declared a content security policy, or had defined it as a wildcard, the second-order injection would have worked.


There are suggestions that the extension itself should inject the other scripts, just as the first one was loaded; it is all a matter of flexibility. Currently, I am exploring the onHeadersReceived event of the chrome.webRequest API for this. The official docs say you can only modify content-security-policy headers if you specify 'extraHeaders' as an option, and they warn that this is a resource-intensive approach, but for me it is worth trying just for experimentation. A rough sketch of the idea is below.
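
Here is a rough sketch of that experiment, assuming a Manifest V2 background script with the webRequest, webRequestBlocking, and matching host permissions declared; the match pattern is a placeholder, and I have not yet verified this end to end.

background.js (experimental sketch)
// Drop the Content-Security-Policy response header for the target pages so
// that second-order script injection is no longer blocked.
chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    var headers = details.responseHeaders.filter(function (header) {
      return header.name.toLowerCase() !== "content-security-policy";
    });
    return { responseHeaders: headers };
  },
  { urls: ["https://*.example.test/*"], types: ["main_frame", "sub_frame"] },
  // "extraHeaders" is what the docs say is required to see and modify CSP headers.
  ["blocking", "responseHeaders", "extraHeaders"]
);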

Let me know in the comments how you are handling these situations in your own development workflow.
Feel free to ask for more clarity in the comments; I might provide a GitHub link if required.
