Recently, a client approached us at Serpstat with a unique request for a custom data export format. This wasn’t a standard feature, and there was a real risk it would get lost in the shuffle of our ongoing development roadmap. At its core, they needed an API wrapper for Serpstat’s RtApiSerpResultsProcedure.getUrlsSerpResultsHistory API method, but with a very specific output: a custom filtered data set presented in a wide format, for a single date.
This presented an interesting challenge. Instead of deferring it or adding it to a lengthy backlog, I decided to take it on directly. My goal was to demonstrate our flexibility and commitment to client needs, even for highly specialized requests. Building this wrapper would involve not just querying the existing Serpstat API, but also developing the web app with a simple UI. This project, while seemingly niche, offered a valuable opportunity to explore the extensibility of our systems and deliver a tailored solution that went beyond our standard offerings.
UI PoC
Gemini helped me draft the following prompt for later use in Gemini Canvas. It integrated both the client’s explicit requirements for the app and my own vision for its aesthetic and user experience. The output from Gemini Canvas was very good, providing a strong foundation for the frontend; I made only a few minor adjustments to achieve the final look and functionality.
Gemini Canvas Prompt
I want to create an interactive dashboard for Serpstat API method data visualization.
1. Dashboard Functionality
- The dashboard should be a single, static HTML page that is self-contained.
- It must connect to the Serpstat API using the https://api.serpstat.com/v4/ endpoint.
- The user should be able to input an API key. There should be a note telling the user that they can get the API key by visiting the https://serpstat.com/users/profile/ page; this link should open in a separate window. Upon submission, the page will make the API calls to the RtApiProjectProcedure.getProjects API method.
Request example:
curl --request POST \
--url 'https://api.serpstat.com/v4/?token=123' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"id": "123",
"method": "RtApiProjectProcedure.getProjects",
"params": {
"page": 1,
"pageSize": 20
}
}'
Response example:
{
"id": "123",
"result": {
"data": [
{
"id": "1213108",
"projectName": "PUMA",
"domain": "puma.com",
"createdAt": "2024-01-09 11:10:57",
"group": "New Client",
"type": "owner",
"status": 9,
"enableTracking": false
}
],
"summary_info": {
"page": 1,
"page_total": 1,
"count": 20,
"total": 1
}
}
}
- The list of project IDs with corresponding project names should be shown to the user, and the user should be able to select a project. There should also be an option to paste a project ID manually. Upon submission, the page will make the API calls to the RtApiSearchEngineProcedure.getProjectRegions API method.
Request example:
curl --request POST \
--url 'https://api.serpstat.com/v4/?token=123' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"id": "123",
"method": "RtApiSearchEngineProcedure.getProjectRegions",
"params": {
"projectId": 853932
}
}'
Response example:
{
"id": "string",
"result": {
"projectId": 853932,
"regions": [
{
"id": 0,
"active": true,
"serpType": "string",
"deviceType": "string",
"searchEngine": "string",
"region": "string",
"country": "string",
"city": "string",
"langCode": "string"
}
],
"spent_limits": 0
}
}
- The list of region IDs should be shown to the user, and the user should be able to select the region. Upon selection, the page will make a call to the RtApiSerpResultsProcedure.getUrlsSerpResultsHistory API method. The dateFrom and dateTo parameters must both be equal to the selected date.
Example request:
curl --request POST \
--url 'https://api.serpstat.com/v4/?token=123' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"id": "123",
"method": "RtApiSerpResultsProcedure.getKeywordsSerpResultsHistory",
"params": {
"projectId": 1269282,
"projectRegionId": 393639,
"page": 1,
"pageSize": 20,
"dateFrom": "2025-04-02",
"dateTo": "2025-05-02",
"sort": "date",
"order": "asc",
"withTags": true
}
}'
Example response:
{
"id": "123",
"result": {
"data": {
"projectId": 1269282,
"projectRegionId": 393639,
"keywords": [
{
"keyword": "walk in clinic tampa",
"frequency": 0,
"expectedUrl": "https://www.fasttrackurgentcare.com/walk-in-clinics-tampa/",
"history": [
{
"date": "2025-04-02",
"positions": [
{
"position": 1,
"url": "https://www.cvs.com/minuteclinic/clinic-locator/fl/tampa/"
},
{
"position": 2,
"url": "https://www.fasttrackurgentcare.com/walk-in-clinics-tampa/"
},
{
"position": 3,
"url": "https://www.southtampaimmediatecare.com/"
},
{
"position": 4,
"url": "https://www.fasttrackurgentcare.com/"
},
{
"position": 5,
"url": "https://centracare.adventhealth.com/urgent-care/tampa"
},
{
"position": 6,
"url": "https://www.asouthtampaurgentcare.com/"
},
{
"position": 7,
"url": "https://www.yelp.com/search?find_desc=Walk+In+Clinics&find_loc=Tampa%2C+FL"
},
{
"position": 8,
"url": "https://baycare.org/locations/b/baycare-urgent-care-south-tampa"
},
{
"position": 9,
"url": "https://www.firstcarewalkinclinic.org/"
},
{
"position": 10,
"url": "https://www.mymdnow.com/locations/hillsborough/usf"
},
...
{
"position": 99,
"url": "https://rogersbh.org/"
},
{
"position": 100,
"url": "https://www.pediatricassociates.com/"
}
]
}
],
"tags": []
}
]
},
"summary_info": {
"page": 1,
"page_total": 1,
"count": 20,
"total": 1,
"sort": "date",
"order": "asc"
},
"spent_limits": 0
}
}
- A text field should be shown where the user can input a list of competitor domains or subdomains, each on a new line. If any data was input into the competitors field, there should be a button to filter the results of the RtApiSerpResultsProcedure.getUrlsSerpResultsHistory API method call to include only the analysed domain and the competitor domains.
2. Data Presentation and Styling
- The dashboard should have a dark theme throughout. The highlighted elements in the dropdowns must be colored in dark blue.
- The keywords data should be displayed as a single table with the following structure:
-- Keyword column
-- Search volume column (frequency parameter value in the API response)
-- Domain columns (domain or subdomain names should be extracted from each URL in the API response, for example, "https://cloud.google.com/bigquery/pricing" becomes "cloud.google.com") with position values
3. Technical Requirements
- API Key: API key must be used as "token" GET-parameter with every API call.
- Pagination Handling: All pages from every API response must be fetched.
- Dropdown Search: All dropdown elements must support a search option to be able to find any value.
- API Key Input: The API key input field must have autocomplete="off" to prevent browser suggestions.
- Error Handling: The application must handle API errors gracefully and display user-friendly error messages.
- Loading State: A spinner should be shown while data is being fetched from the API.
- Code Structure: The final code should be a single, complete HTML file with all the necessary CSS and JavaScript. All of the code should be contained within an immersive artifact.
- Visual Consistency: All dropdowns and buttons must be of the same height and perfectly aligned with each other.
- Data Export: There should be an option to copy the keywords table to the clipboard.
- Data Filtering: There should be an option to filter the data using the competitors' list after the keywords data is fetched.
- HTML File Naming: The HTML file should be named serpstat-rank-tracker-competitors.html.
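The core transformation the prompt asks for, extracting the domain from each ranking URL and pivoting positions into per-domain columns, can be sketched in JavaScript as follows. This is a minimal sketch: the function names are illustrative and not taken from the app, and the input shape follows the getKeywordsSerpResultsHistory response example above.

```javascript
// Extract the domain or subdomain from a ranking URL, e.g.
// "https://cloud.google.com/bigquery/pricing" -> "cloud.google.com"
function extractDomain(url) {
  return new URL(url).hostname;
}

// Pivot one keyword's SERP history for a single date into a wide row:
// keyword, search volume, and one column per domain holding its position.
function toWideRow(keywordEntry, date) {
  const row = { keyword: keywordEntry.keyword, frequency: keywordEntry.frequency };
  const day = keywordEntry.history.find(h => h.date === date);
  if (!day) return row;
  for (const { position, url } of day.positions) {
    const domain = extractDomain(url);
    // If a domain ranks more than once, keep its best (lowest) position.
    if (!(domain in row) || position < row[domain]) row[domain] = position;
  }
  return row;
}
```

Running toWideRow over every keyword entry for the selected date yields the rows of the final table, with one column per domain encountered in the results.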
UI Modifications
Transition to Gemini CLI for Refinement
I transitioned my workflow to the Gemini Command Line Interface (CLI), specifically for making iterative, minor adjustments to the project. This decision stemmed from the inherent limitations I encountered with Gemini Canvas, where certain modifications were either too complex to implement efficiently or resulted in unacceptably slow development times.
By utilizing the Gemini CLI, I aimed to gain greater control and precision over the refinement process, allowing for more agile and nuanced changes. To ensure I had sufficient resources for demanding tasks, I configured the Gemini CLI to leverage my Google Cloud Platform (GCP) Vertex AI credentials. This configuration allowed me to access higher Gemini 2.5 Pro limits.
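For reference, switching the Gemini CLI to Vertex AI credentials comes down to a few environment variables. This is a sketch based on the Gemini CLI documentation; the project ID and region below are placeholders:

```shell
# Tell the Gemini CLI to authenticate via Vertex AI instead of an API key
export GOOGLE_GENAI_USE_VERTEXAI=true
# The GCP project and region whose Vertex AI quota should be used (placeholders)
export GOOGLE_CLOUD_PROJECT="my-gcp-project"
export GOOGLE_CLOUD_LOCATION="us-central1"
# Authenticate with Application Default Credentials
gcloud auth application-default login
```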
Initial Successes and Subsequent Challenges
Following the refinement phase with the Gemini CLI, I started to test the app. Initial tests, conducted on my Serpstat rank tracker projects, yielded highly encouraging results, demonstrating the app’s effectiveness and accuracy within a controlled environment. However, the subsequent test on larger client projects revealed a significant bottleneck.
The app began to experience failures, often attributed to the sheer scale of the data and the demands placed on system resources. A particularly concerning issue was the excessive consumption of RAM, with a single browser tab consuming an unsustainable amount of memory. This indicated a fundamental architectural limitation when scaling to accommodate the extensive datasets characteristic of larger client accounts.
Strategic Shift: From Frontend to Hybrid Architecture
Recognizing these scalability and resource consumption issues, I made a strategic decision to pivot the app’s architecture. Instead of maintaining a purely frontend-driven solution, I decided to offload the existing frontend interface into a standalone HTML file. This move was a precursor to a more significant architectural shift: the integration of a backend component.
By introducing a backend, I could distribute computational load, manage larger datasets more efficiently, and mitigate the excessive RAM usage that plagued the frontend-only approach. This hybrid architecture would allow for more robust data processing, improved performance, and enhanced scalability, ultimately addressing the limitations encountered with the initial implementation on large client projects.
Backend Structure
I decided to develop the backend in R, primarily due to my existing maintenance of the serpstatr package. This package, readily available on CRAN, provides native support for the Serpstat API method required for data retrieval, streamlining the initial development process. I leveraged and adapted a pre-existing R script of mine, originally designed for multi-page data pulling, to incorporate the custom data transformation logic necessary for this app.
For exposing this data, the plumber package proved invaluable. It allowed for the creation of a robust API layer that initially formatted the raw Serpstat API output into an HTML table. I injected this HTML table directly into the frontend HTML, providing a visual representation of the data.
Initially, for enhanced table formatting and user interaction, I used the DataTables JavaScript library. DataTables offers a suite of common table operations out of the box, including filtering, sorting, and exporting capabilities. While this solution functioned effectively, it soon became apparent that the primary use case for the app’s data was simple CSV-like export. In this context, the interactive features and visual overhead introduced by DataTables created a negative user experience.
A more efficient solution was to use plumber’s native support for response serialization into various formats, so I replaced the HTML serialization with CSV serialization. With the revised workflow, as soon as the data is fetched from the Serpstat API and formatted, the CSV file is immediately downloaded by the client.
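In plumber, swapping the serializer is essentially a one-line annotation change. Below is a minimal sketch; the endpoint path, parameters, and placeholder data are illustrative, not the app’s actual code:

```r
library(plumber)

#* Wide-format keyword report, returned as CSV instead of HTML
#* @serializer csv
#* @get /report
function(projectId, projectRegionId, date) {
  # Placeholder rows; the real app builds this data frame from the
  # Serpstat API via serpstatr plus the pivoting logic described earlier
  data.frame(
    keyword = "walk in clinic tampa",
    frequency = 0,
    check.names = FALSE
  )
}
```

With the csv serializer, plumber sets the appropriate Content-Type; plumber also provides as_attachment() for suggesting a download filename to the browser.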
However, this implementation change completely broke the app. To resolve the issues, I sought assistance from the Gemini CLI, providing it with detailed context in a GEMINI.md file:
You are creating a Serpstat Rank Tracker API wrapper app.
### General requirements
1. Front end is the static HTML file temp.html with the least amount of external dependencies.
2. Backend is the R code in the app.R file that uses the plumber package for API creation.
3. The app is deployed on the Google Cloud Run using the procedure described in deploy.sh file with placeholders substituted with real values. The Docker image is built from a Dockerfile.
Gemini fixed the issue and added some welcome improvements, like a visual loader that gives clear feedback during data processing. It also optimized the handling of exported files, automatically generating proper, descriptive file names that make organization and retrieval much easier for users. These advancements collectively contributed to a more stable and user-friendly app.
Deploying the App
I deployed the app to Google Cloud Run because it provides a comprehensive suite of features for logging, observability, and automatic scaling, all available out of the box, significantly streamlining the operational aspects of the deployment. This was my general deployment procedure:
set gcp_sa=GCP_SERVICE_ACCOUNT
set gcp_sa_key=PATH_TO_GCP_SERVICE_ACCOUNT_JSON_KEY
set project_id=GCP_PROJECT_ID
set app_name=APP-NAME
set default_zone=GCP_REGION
set docker_repo=%default_zone%-docker.pkg.dev
gcloud auth list
gcloud auth activate-service-account %gcp_sa% --key-file=%gcp_sa_key%
gcloud auth configure-docker %docker_repo%
gcloud artifacts repositories create %app_name% --repository-format=docker --location=%default_zone% --project=%project_id%
docker build -t %app_name% .
docker tag %app_name% %docker_repo%/%project_id%/%app_name%/%app_name%:latest
docker push %docker_repo%/%project_id%/%app_name%/%app_name%:latest
REM gcloud artifacts docker images list %docker_repo%/%project_id%/%app_name%/%app_name%
gcloud run deploy %app_name% --image %docker_repo%/%project_id%/%app_name%/%app_name% --region=%default_zone% --ingress=all --max-instances=5 --cpu=1 --memory=1024Mi --project=%project_id%
Before proceeding with these commands, ensure that your environment is properly set up. You will need gcloud, the command-line interface for Google Cloud, installed and configured on your system; it is essential for interacting with Google Cloud services. You will also need Docker, which builds, ships, and runs the containerized apps that Cloud Run deploys.
Furthermore, a Google Cloud Platform service account is required here. You will need to create a JSON key for this service account and download it to your local machine; gcloud will use this key to authenticate your requests. Think of it like this: a user account is like your personal driver’s license, identifying you as a specific person. A service account is like the key to a specific delivery truck; it is not tied to a person, just to a job, and it only has permission to operate that truck and drive its delivery route.
A service account follows the principle of least privilege: you create it for a single purpose and grant it only the exact permissions (Identity and Access Management roles) it needs to do that one job. If the key is ever leaked, the potential damage is limited to what that specific account can do; the attacker couldn’t delete your databases or access your billing information. For this script, the account should be granted roles that provide access to both Artifact Registry and Cloud Run: Artifact Registry access is needed for storing and managing your container images, while Cloud Run access is required for deploying and managing your containerized apps.
Should any of these prerequisites be missing or incorrectly configured, gcloud will typically provide a clear and concise error message, often accompanied by a solid explanation of how to rectify the issue and a link to the documentation. This makes troubleshooting straightforward and helps guide you toward the correct setup.
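To make that least-privilege setup concrete, here is a sketch of creating such a scoped service account with gcloud. The account and project names are placeholders, the roles are illustrative, and depending on your setup Cloud Run deployment may additionally require the Service Account User role on the runtime service account:

```shell
# Create a single-purpose deploy account (names are placeholders)
gcloud iam service-accounts create deploy-bot --project=my-gcp-project

# Grant only what the deploy script needs: push images, deploy services
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:deploy-bot@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:deploy-bot@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/run.admin"

# Create and download the JSON key used by the deploy script
gcloud iam service-accounts keys create key.json \
  --iam-account=deploy-bot@my-gcp-project.iam.gserviceaccount.com
```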
The Dockerfile looked like this:
# rocker/tidyverse ships R with the tidyverse preinstalled
FROM rocker/tidyverse
RUN apt-get -y update
# Install the backend dependencies
RUN R -e "install.packages(c('plumber', 'glue', 'serpstatr'))"
WORKDIR /app
COPY . .
# Cloud Run supplies the PORT environment variable; default to 8080 locally
CMD ["R", "-e", "pr <- plumber::plumb('app.R'); pr$run(host = '0.0.0.0', port = as.numeric(Sys.getenv('PORT', 8080)))"]
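Before pushing to Artifact Registry, the image can be sanity-checked locally. This assumes Docker is running; the image tag is a placeholder, and the PORT variable mirrors what Cloud Run injects:

```shell
# Build the image and run it locally on port 8080
docker build -t serpstat-rank-tracker .
docker run --rm -e PORT=8080 -p 8080:8080 serpstat-rank-tracker

# In another terminal, confirm the plumber API responds
curl -i http://localhost:8080/
```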
Upon resolving the various deployment-related issues, gcloud successfully provided me with a functional link to the newly deployed app. I subsequently conducted thorough testing using several of Serpstat’s existing Rank Tracker projects, and the app performed flawlessly, consistently delivering accurate results.
While I recognize that the app could benefit from certain user experience enhancements, I believe it is prudent to first gather feedback directly from our client to ascertain if these improvements are genuinely necessary and prioritize them accordingly. Their initial impressions and specific use cases will be invaluable in guiding any further development efforts.
Final thoughts
I spent 8 hours on this app, a turnaround I believe would have been unattainable through our conventional development process. Gemini Canvas is included in our existing Google Workspace accounts, so there were no additional licensing costs. The Gemini CLI expenses were negligible as well: thanks to the precise initial context, the app’s code stayed compact and context utilization never exceeded 5%. My primary motivation was to address the client’s requirements, even in areas where I lacked prior expertise, and in that regard the AI tools were of great assistance.