How To Build A Niche Job Board with NextJS, Tailwind, MongoDB, & Algolia

Shaun Smerling
22 min read · May 14, 2023


In this in-depth tutorial, you’ll learn how to build a niche job board for any industry using NextJS, Tailwind, MongoDB, & Algolia. I’ll also be mixing in a bit of NodeJS & Python for scripts on the backend.

In the summer of 2022, I had the idea to build a job board for the DTC / eCommerce industry, which is exactly what I’ll be basing this guide on.

Niche job boards continue to grow in popularity among communities that need a dedicated way to search for jobs in their field. Job boards like Japan Dev, Ranch Work, and No Code Jobs are great examples of this.

ecomportal.co, a job board for the eCommerce industry

Lastly, if there’s anything missing from this tutorial or you encounter any bugs, I strongly recommend doing your best to power through and debug.

Now that the world has been blessed with GPT-4, I’m confident you can figure this out on your own. Additionally, the skill of being able to problem-solve and source solutions is a muscle you’ll want to be building anyway.

Phase #1 ~ Sourcing Jobs + Storing Jobs:

Setting up NextJS:

To set up a simple NextJS app, head to your terminal and type in npx create-next-app app-name (replacing app-name with your app name, of course).

I’m a fan of using the pages routing system, but I am aware that Next 13 leverages the app router. Since I use the pages router in this tutorial, that is what I’ll recommend. What that means is: when NextJS asks you if you want to use the app router, kindly click no!

Lastly, you might ask why we are even using NextJS for this project.

NextJS provides us with something called Server-Side Rendering, which lets us render data on the server instead of on the client, as is normally done with React.

SSR (Server-Side Rendering) in Next.js is better for SEO (Search Engine Optimization) because it allows search engine crawlers to easily index the content of your website. This is because SSR generates the HTML on the server and sends the pre-rendered HTML page to the client, which can be easily crawled and indexed by search engine bots.

SEO is going to be a fundamental marketing channel for your job board down the line, so it’s important to use a tech stack that suits the way you aim to get most of your site visitors in the future.

Creating a Database

Let’s set up our database really quickly. We’ll be using MongoDB for our database. It’s pretty simple to set up and use. I’d also recommend Supabase for beginners, but for the sake of this tutorial we’ll be using MongoDB.

Head over to this URL and create an account with MongoDB.

Now go ahead and follow the setup instructions to create your database using Atlas. You should end up on a page with your empty database created.

Now head over to Network Access and make sure you grant your IP address (and all IP addresses) access to this network. You should have 0.0.0.0/0 active with access if this is done correctly.

Now, back in your database, click Connect, and click MongoDB for VS Code. You’ll be given a URI string that looks something like:

mongodb+srv://<username>:<password>@{database_name}.krz3l.mongodb.net/

Save this in your .env file in VS Code, labeled as DB_URL, replacing the username, password, and database name placeholders with your respective values.

Within the database you’ve created, create a collection and name it Job. That’s all for now.

Sourcing Jobs

The first thing I recommend thinking about is what type of niche jobs you’d like to source. Are you sourcing jobs for lawyers? Accountants? Maybe it’s a job board for NextJS developers? Or jobs in Singapore?

Whatever the case may be, it’s good to make the niche clear before anything else so we can work out where these jobs are being posted. There are three ways you can source these jobs once you’ve determined your niche:

  1. Build a web scraper with Beautiful Soup or Axios / Cheerio
  2. API calls for jobs hosted on ATSes like Greenhouse / Lever / Workable
  3. Manually, with a spreadsheet where you write the relevant job information and then upload it to your DB

In the beginning, because I was optimizing for speed, I went with plan #3. Knowing what I know now, I’d recommend going with plan #2.

Applicant Tracking Systems are where a ton of companies host their jobs. Popular ATSes are as follows:

  1. Greenhouse
  2. Lever
  3. Workable
  4. Jobvite
  5. BambooHR
  6. Workday

All of these have job pages where companies host all of their open roles.

https://jobs.lever.co/fanatics brings you to this page. Perfect for web scraping

But some of them go a step further and allow you to make API calls to a URL in order to retrieve JSON about their open jobs…

You’ll notice the URL at the top is an API endpoint. The only part of the URL specific to Harry’s is the brand name. Swap out the brand name and you get a different company’s endpoint, e.g. allbirds.

In both cases, you’ll build scripts to retrieve job data that you can categorize.

Building a Python Script to Source From API Endpoints:

We’ll be using Python for this script, so you’ll need to go ahead and install Python 3.

To start, here are all my imports for this script:
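Roughly, something like this (swap in whatever your version of the script actually uses — geopy is my assumption for the geolocation step later on):

import os
import re
from datetime import datetime

import requests
from dotenv import load_dotenv, find_dotenv
from geopy.geocoders import Nominatim
from pymongo import MongoClient

load_dotenv(find_dotenv())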

Our goal is to create a python script that allows us to access the API endpoint to retrieve data.

Here, I’ll be including a {board_id} variable, which will act as the company name. Later on, we’ll create a list of company names that have jobs hosted on Greenhouse (or any other ATS that has an API endpoint) and loop through them to perform a GET request, as sketched below.
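A rough sketch of that loop (the board_id values are hypothetical company slugs; the URL is Greenhouse’s public job board API):

board_ids = ["harrys", "allbirds"]  # hypothetical company slugs

for board_id in board_ids:
    url = f"https://boards-api.greenhouse.io/v1/boards/{board_id}/jobs?content=true"
    response = requests.get(url)
    if response.status_code != 200:
        continue  # board doesn't exist or isn't public
    jobs = response.json().get("jobs", [])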

When you perform this request, you’ll get JSON back. We need to parse the JSON and store the data we need in variables. Here are the categories of data I collect for each job:

  1. job_role (Job titles such as Frontend Developer, Staff Accountant, etc.)
  2. job_category (Creative, Backend, Logistics, Product, etc…)
  3. job_type (remote, in-office, or hybrid)
  4. job_requirements (Requirements section for the job posts)
  5. job_description (Job description section)
  6. posted_at (date posted)
  7. Unix Timestamp (will need this for ranking by recency on Algolia)
  8. salaryMin
  9. salaryMax
  10. job_url

Depending on the endpoint data structure you pull from, your method of sorting the data will be different.

But overall, here is how I am grabbing each variable needed:

  1. Job Role: I’m searching for the label title which usually precedes the job title data
  2. Job Category: I’m searching for the label department which usually precedes the department
  3. Job Type: For a lot of the data, we’ll create our own functions to parse the content we’re given. Sometimes the wording or the structure differs between boards, and we want to handle all the cases. Here is what my determine_job_type function looks like for determining if the job is remote, hybrid, or in-office (sketched after this list):
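A simplified version of that function might look like this (the exact keywords you match on will depend on the postings you see):

def determine_job_type(location_text, content):
    # Best-effort guess from whatever text the ATS gives us
    text = f"{location_text} {content}".lower()
    if "hybrid" in text:
        return "hybrid"
    if "remote" in text:
        return "remote"
    return "in-office"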

4. Parsed Content: These API endpoints won’t necessarily separate their content into job requirements, job description, and company description sections. They’ll just give you all of the content. Fortunately, it’s pretty easy to notice patterns. Most job content is in the order of:

  1. Company Description
  2. Job Description
  3. Job Requirements

A simple function I’d recommend building is one such as the following:
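Something along these lines (the marker words are assumptions; tune them to the postings you actually see):

def split_content(content):
    # Assume the blob reads: company description, job description, requirements
    lowered = content.lower()
    for marker in ["requirements", "qualifications", "what you'll need"]:
        idx = lowered.find(marker)
        if idx != -1:
            job_description = content[:idx].strip()
            job_requirements = content[idx:].strip()
            return job_description, job_requirements
    return content.strip(), ""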

5. Application URL: The simplest data point you’ll get. This usually comes exactly how you want it. You can store it in a variable application_url.

6. Location: This gets a little tricky because there are many different location structures one job board might give you that differ from the rest. For example, US is different from USA is different from United States of America is different from California, USA. Here’s how I structured the function to parse through this:
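Here’s a sketch of the idea (I’m illustrating with geopy’s Nominatim geocoder, imported earlier; whichever geolocation package you use, the shape is the same):

geolocator = Nominatim(user_agent="job-board-script")

def parse_location(location_text):
    # "City, State, Country" or "City, Country" -- take the first chunk as the city
    parts = [p.strip() for p in location_text.split(",")]
    city = parts[0] if parts else ""
    country = ""
    try:
        geo = geolocator.geocode(location_text, addressdetails=True, timeout=10)
        if geo:
            country = geo.raw.get("address", {}).get("country_code", "").upper()
    except Exception:
        pass  # edge cases happen; handle the remaining 1% manually
    return city, country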

The good news is that what I found in 99% of cases is a structure of city, state, country or city, country. Therefore, I’d take the first word in the location data point and categorize it as the city. Like I said, this is 99% right but not 100%. You’ll have edge cases.

Then, I took the whole location and used a geolocation package to submit the location as an address and have the package find the country name for that address.

It would spit back different names for the same country, so I created an if/elif statement to parse through them and label each country with its two-letter initials.

7. Posted At & Unix Timestamp: This is similar to the application URL in that the data comes mostly how you’d want it, with one caveat: dates can be expressed differently based on location. Some use mm/dd/yy (jobs in America), others dd/mm/yy (jobs in Asia). Almost all of these will also come back with the timestamp.

It’s important you structure your date as mm/dd/yy for the purpose of this tutorial. You’ll also be using the timestamp to get the unix timestamp. Unix timestamp format is particularly popular in computing. It measures time by the number of seconds that have elapsed since 00:00:00 UTC on 1 January 1970, the Unix epoch, without adjustments made due to leap seconds. Here’s how my functions for converting to mm/dd/yy and getting the Unix timestamp look:
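Roughly (assuming the ATS returns ISO-style date strings, which Greenhouse does):

def to_mmddyy(date_string):
    # e.g. "2023-05-01T12:00:00-04:00" -> "05/01/23"
    dt = datetime.fromisoformat(date_string.replace("Z", "+00:00"))
    return dt.strftime("%m/%d/%y")

def to_unix_timestamp(date_string):
    # Seconds since the epoch, for Algolia's recency ranking
    dt = datetime.fromisoformat(date_string.replace("Z", "+00:00"))
    return int(dt.timestamp())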

8. Salary Ranges: I found this to be the trickiest data to parse. The reason is that there are a lot of different ways to display salary ranges: think of the currency symbol, the inclusion of a comma or a period, the number of zeros, a min without a max or vice versa, etc. The simplest way to deal with this has been this formula:
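In sketch form (the prose below walks through what each piece does):

def parse_salary(content):
    # Find "$50,000-$70,000" or "$80,000" style strings
    salary_range = re.findall(r"\$[\d,]+-?\$?[\d,]*", content)
    if not salary_range:
        return 0, 0
    # Strip "$" and "," from each side of the dash
    salary_values = [
        value.replace("$", "").replace(",", "")
        for value in salary_range[0].split("-")
        if value
    ]
    if len(salary_values) == 2:
        return int(salary_values[0]), int(salary_values[1])
    if len(salary_values) == 1:
        return 0, int(salary_values[0])
    return 0, 0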

With the salary_range variable, I’m using the “re” module (which stands for regular expressions) to search for a pattern within the “content” variable.

The pattern is defined as follows:

  • \$ : a literal dollar sign
  • [\d,]+ : one or more digits or commas
  • -? : an optional dash (-)
  • \$? : an optional literal dollar sign
  • [\d,]* : zero or more digits or commas

This pattern is used to find strings that represent a salary range, like “$50,000-$70,000” or “$80,000”.

The “findall” function of the “re” module returns all non-overlapping matches of the pattern in the “content” variable as a list.

So, the variable “salary_range” will have a list with all the salary ranges found in the “content” variable, including the dollar signs ($) and commas (,).

Then it takes the first value of the range (which is the value before the dash) and removes the dollar sign ($) and commas (,). This value symbolizes the minimum salary.

After that, it takes the second value of the range (which is the value after the dash) and removes the dollar sign ($) and commas (,). This value symbolizes the maximum salary.

Finally, it creates a list with these two values without the dollar sign ($) and commas (,).

So, the variable “salary_values” will have a list with two numbers that represent the minimum and maximum salaries in the original range, without any symbols.

After that, it’s just categorizing based on what I have. If I have two salary amounts, the first one will be the minimum and the second will be the max. If I only have one, I’ll store it as the max. Otherwise, I’ll store both values as 0.

Later on, when you display the salary of your jobs, you’ll need to decide what to display and what not to display. For my logic, I only display salaries that are higher than 10,000 and where neither the min nor the max is 0.

9. Job URL: With the data I have, I want to create a custom URL for each job that I’ll use in a GET request to display my jobs later on. The reason I do this is to add key information like the job title or the job type into the URL for SEO purposes.
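For example (this slug format is my own; anything unique and keyword-rich works):

def generate_job_url(job_type, job_position, job_id):
    # e.g. "remote-frontend-developer-1234"
    slug = f"{job_type} {job_position}".lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return f"{slug}-{job_id}"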

Lastly, I’ll store each value in the corresponding field of our formatted_entry object:
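Mine ends up shaped roughly like this (the field names here match what the rest of the tutorial expects):

formatted_entry = {
    "company_name": board_id,
    "job_position": job_role,
    "job_category": job_category,
    "job_type": job_type,
    "job_description": job_description,
    "job_requirements": job_requirements,
    "posted_at": posted_at,
    "datets": unix_timestamp,  # Unix timestamp for recency ranking
    "salaryMin": salary_min,
    "salaryMax": salary_max,
    "application_url": application_url,
    "jobUrl": job_url,
}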

My __main__ function just loops through the different company names fed into the API endpoint, sources the data, and then checks whether each job exists in the database before pushing it in.
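In sketch form (fetch_jobs and format_job are hypothetical wrappers standing in for the request and parsing logic above):

if __name__ == "__main__":
    for board_id in board_ids:
        for job in fetch_jobs(board_id):       # hypothetical wrapper around the GET request
            entry = format_job(board_id, job)  # hypothetical wrapper around the parsing above
            if not check_job_exists_in_db(entry["application_url"]):
                insert_job_in_db(entry)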

Two things to highlight here are the check_job_exists_in_db function and the insert_job_in_db function. Remember when we created a database and stored our DB_URL in the .env file? This is when we’ll be using it. At the top of my script, I’m importing MongoClient from pymongo to get access to MongoDB.

Then, I create the two simple functions as follows:
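Something like this (the database name is a placeholder; use whatever you created in Atlas):

mongo_client = MongoClient(os.getenv("DB_URL"))
db = mongo_client["your_database_name"]  # placeholder -- your Atlas database name
job_collection = db["Job"]

def check_job_exists_in_db(application_url):
    # application_url is the unique identifier for a posting
    return job_collection.find_one({"application_url": application_url}) is not None

def insert_job_in_db(formatted_entry):
    job_collection.insert_one(formatted_entry)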

I’m going to check whether application_url already exists in the Job collection. I use application_url because it’s a unique identifier. If I used the job title or company name to check, there would be duplicates of this entry.

Lastly, if it does not exist, I insert the job data into the collection.

This method of using API endpoints to source data works for a lot of the different ATSes out there. I recommend researching which ones have APIs you can call. For the ones that do have an endpoint, you can build out this same script and just swap the Greenhouse URL for your ATS’s. I’ve done that exact thing with Lever job boards as well:
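For reference, Lever exposes a similar public postings endpoint, so the fetch looks almost identical (sketch):

def fetch_lever_jobs(board_id):
    # Lever's public postings API -- same pattern, different URL shape
    url = f"https://api.lever.co/v0/postings/{board_id}?mode=json"
    response = requests.get(url)
    return response.json() if response.status_code == 200 else []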

Phase #2 ~ Creating An API for Structuring + Displaying Job Data with Prisma:

Now that we have a way to source jobs and store them in a database, we need to be able to read that data and display it. Enter APIs!

Let’s head to the pages section of our NextJS application. There, you’ll see an already built-out folder labeled api with a hello.ts file in it. Create a jobs.js file.

If you’ve never dealt with APIs, this is how it’ll look when you use one to fetch data. We are basically creating endpoints that we can use to GET, POST, PUT, or DELETE data.

Here’s how it’ll look if you ever want to make these types of requests using the fetch method:
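For example, from anywhere in the app (the endpoint is the one we’re about to create):

// GET all jobs
const jobs = await fetch("/api/jobs").then((res) => res.json());

// POST a new job
await fetch("/api/jobs", {
  method: "POST",
  body: JSON.stringify({ job_position: "Frontend Developer" }),
});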

Here is how your jobs.js should look:

import {
  getAllJobs,
  createJob,
  getJobByJobUrl,
  getJobByJobID,
  updateJob,
  deleteJob,
  getJobsByCompanyUrl,
  getJobByDate,
} from "../../prisma/job";

export default async function handler(req, res) {
  try {
    switch (req.method) {
      case "POST": {
        // Create a new job
        const body = JSON.parse(req.body);
        const job = await createJob(body);
        return res.json(job);
      }
      case "GET": {
        // Get a single job if a jobUrl is provided in the query
        // api/jobs?jobUrl=...
        const query = req.query;
        if (query.jobUrl) {
          const job = await getJobByJobUrl(query.jobUrl);
          return res.json(job[0]);
        }

        // api/jobs?id=1
        if (query.id) {
          const job = await getJobByJobID(query.id);
          return res.json(job[0]);
        }

        // Otherwise, fetch all jobs
        const jobs = await getAllJobs();
        return res.json(jobs);
      }
      case "PUT": {
        const body = JSON.parse(req.body);
        const query = req.query;

        // Update all jobs that are missing a jobUrl
        // http://localhost:3000/api/jobs?updateAll=true
        if (query.updateAll) {
          const jobs = await getAllJobs();
          jobs.forEach(async (job) => {
            if (!job?.jobUrl) {
              // generateJobUrl is the slug helper described earlier
              const jobUrl = generateJobUrl(
                job.company_name,
                job.job_position,
                job.job_type
              );

              const { id, ...otherDetails } = job;
              await updateJob(id, {
                ...otherDetails,
                jobUrl,
              });
            }
          });
          return res.json({ updated: true });
        }

        // Update an existing job
        const { id, ...updateData } = body;
        const user = await updateJob(id, updateData);
        return res.json(user);
      }
      case "DELETE": {
        // Delete an existing job
        const body = JSON.parse(req.body);
        const { id } = body;
        const job = await deleteJob(id);
        return res.json(job);
      }
      default: {
        return res.status(405).json({ message: "Method Not Allowed" });
      }
    }
  } catch (error) {
    return res.status(500).json({ ...error, message: error.message });
  }
}

Notice the top of this file has imports coming from a prisma folder. Prisma is what we’ll build next.

Using Prisma As Our ORM:

An ORM (Object-Relational Mapping) service like Prisma is needed to simplify the process of interacting with a database from an application. It provides a higher-level interface for working with a database, allowing developers to work with objects and code in a more natural way, rather than writing raw SQL queries.

ORMs like Prisma abstract away many of the complexities involved in database management, such as connection management, query generation, data validation, and data manipulation. This makes it easier to work with databases, particularly for developers who may not be experienced with SQL or database administration.

Because of this, we’ll be using Prisma to set up how we source our data within the API.

To get started, run:

npm install prisma --save-dev
npx prisma

This will install the Prisma package we need.

For more information on Prisma, you can head over to their docs here:
https://www.prisma.io/docs

Create a folder called prisma in your root folder.

In that folder, create a jobs.js and a schema.prisma file.

Set up your schema.prisma file as the following:
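A version of the schema that mirrors the data our Python script collects (the field types here are a reasonable guess; adjust them to match what you actually store):

datasource db {
  provider = "mongodb"
  url      = env("DB_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Job {
  id               String  @id @default(auto()) @map("_id") @db.ObjectId
  company_name     String?
  job_position     String?
  job_category     String?
  job_type         String?
  job_description  String?
  job_requirements String?
  posted_at        String?
  datets           Float?
  salaryMin        String?
  salaryMax        String?
  application_url  String?
  jobUrl           String?
}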

Does this data structure look familiar? It’s the exact set of data values we built our Python script on. There’s only one caveat: the id.

The id is auto-generated when you upload a job into your database. It’s a unique value that helps us query the database.

Structure your jobs.js file as the following:
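A minimal sketch (the remaining helpers from the API file’s import list follow the same pattern):

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export const getAllJobs = () => prisma.job.findMany();

export const createJob = (data) => prisma.job.create({ data });

export const getJobByJobUrl = (jobUrl) =>
  prisma.job.findMany({ where: { jobUrl } });

export const getJobByJobID = (id) =>
  prisma.job.findMany({ where: { id } });

export const updateJob = (id, data) =>
  prisma.job.update({ where: { id }, data });

export const deleteJob = (id) => prisma.job.delete({ where: { id } });

// getJobsByCompanyUrl and getJobByDate follow the same findMany pattern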

Although all of these functions are important, we’ll mostly be focusing on getJobByJobUrl.

Remember when we created the job_url value in our Python script? That value is a URL made of the job type + job role + a numerical value. Here’s an example taken straight from ecomportal.co:

This is a unique value, which means we can use it to source jobs from our database.

Phase #3 ~ Building Search + Display with Algolia:

Up to this point, we’ve created a way to source job data from various Applicant Tracking Systems. We also created an API and utilized an ORM to be able to manage that data. Lastly, we are using MongoDB to store that data in a database for us to use. Brilliant.

Now we have to upload all of these jobs to Algolia so that we can get search and the ability to display these jobs.

Head over to www.algolia.com and create an account.

A couple things you should do with Algolia:

  1. Create an index and name it
  2. Get the ALGOLIA_APP_ID key and store it in .env
  3. Get the ALGOLIA_ADMIN_KEY and store it in .env

In your root folder, go ahead and create an algolia.py file. This file will host our script that will take our jobs from the database and upload them to Algolia.

Here is the code you can use in the algolia.py file:

from algoliasearch.search_client import SearchClient
import os
from dotenv import load_dotenv, find_dotenv
from pymongo import MongoClient

load_dotenv(find_dotenv())

algolia_app_id = os.getenv("ALGOLIA_APP_ID")
algolia_admin_key = os.getenv("ALGOLIA_ADMIN_KEY")
db_url = os.getenv("DB_URL")

algolia_client = SearchClient.create(algolia_app_id, algolia_admin_key)
algolia_index = algolia_client.init_index('ecomjobs_index')

connection_string = db_url

# Create MongoDB Client
mongo_client = MongoClient(connection_string, tlsAllowInvalidCertificates=True)
# Get database and collection instances (use your own database/collection names)
db_name = "your_database_name"
collection_name = "Job"
mongo_database = mongo_client[db_name]
mongo_collection = mongo_database[collection_name]

# Retrieve the first 5000 records from the collection
mongo_query = mongo_collection.find()
initial_items = []
new_items = []

for item in mongo_query:
    if len(initial_items) < 5000:
        item['objectID'] = str(item.pop('_id'))

        # Check if datets is not None before converting to int
        datets = item.get('datets')
        if datets is not None:
            item['datets'] = int(datets)

        # Convert salaryMin and salaryMax to integers
        if 'salaryMin' in item and item['salaryMin'] is not None:
            if ',' in item['salaryMin']:
                item['salaryMin'] = int(item['salaryMin'].replace(',', ''))
            elif '.' in item['salaryMin']:
                item['salaryMin'] = int(float(item['salaryMin']))
            else:
                item['salaryMin'] = int(item['salaryMin'])
        if 'salaryMax' in item and item['salaryMax'] is not None:
            if ',' in item['salaryMax']:
                item['salaryMax'] = int(item['salaryMax'].replace(',', ''))
            elif '.' in item['salaryMax']:
                item['salaryMax'] = int(float(item['salaryMax']))
            else:
                item['salaryMax'] = int(item['salaryMax'])

        initial_items.append(item)

# Fetch existing application URLs from the Algolia index
existing_appURLs = [
    hit['application_url']
    for hit in algolia_index.browse_objects({'attributesToRetrieve': ['application_url']})
]

# Keep only items whose application_url isn't already in the index
new_items = [item for item in initial_items if item['application_url'] not in existing_appURLs]

# Print out the size of our initial_items and new_items arrays
print("Initial items:", len(initial_items))
print("New items:", len(new_items))

# Upload the new_items list to your Algolia index
if new_items:
    response = algolia_index.save_objects(new_items)
    print(response)
else:
    print("No new items to upload.")

Let’s test out our scripts now. I’ve named my ATS file greenhouseScrape.py. My Algolia file is named algolia.py. In terminal, I’ll run:

python greenhouseScrape.py
python algolia.py

Run the Greenhouse script first and let it finish. Make sure to include some company names in your board_id array so it has something to loop through. Also check that those companies’ ATS endpoints actually return JSON data.

You should now see data stored and displayed in Algolia:

Phase #4 ~ Building The Landing Page UI:

A simple UI for a job board will allow users to:

  1. Search through jobs
  2. See jobs displayed and be able to click them to be taken to a job page
  3. Post a job

There are many other additions you can make, such as the ability to filter by salary, country, or job type. We won’t be covering that here, but if you’re interested, just let me know and I can build out a follow-up where we include more advanced features.

The Main Page:

Head over to your index.ts file. This is the file where you’ll be creating the landing page. Here’s what mine looks like:

<div>
  <div>
    <div className="relative font-montserrant">
      <FeaturedBrands />
    </div>
    <div className="flex flex-row -mt-2 justify-between items-start px-7 xl:px-10 2xl:px-32 gap-6 mb-5">
      <InstantSearch searchClient={searchClient} indexName="ecomjobs_index">
        <Configure hitsPerPage={10} />

        <div className="max-w-xs w-full hidden lg:block">
          <Filter
            clearFilter={clearFilter}
            setClearFilter={setClearFilter}
          />
        </div>

        <div className="flex font-montserrant flex-col w-full gap-4">
          <div className="p-2 lg:p-4 searchBox -mb-4 lg:mb-0.5 -mx-7 lg:-mx-0 flex flex-row justify-center items-center gap-3">
            {/* Custom Search Box */}
            <div className="w-full">
              <CustomSearchBox
                clearFilter={clearFilter}
                setClearFilter={setClearFilter}
                searchClient={searchClient}
              />
            </div>
            {/* Filter Button For Mobile Filter Open */}
            <div className="h-12 self-end lg:hidden">
              <button
                className="h-full flex flex-row justify-center items-center gap-2 border border-lightGreen-300 rounded-md px-4 w-auto"
                onClick={() => setFilterModelMobile(!filterModelMobile)}
              >
                <FilterIcon className="text-lightGreen-300" />
                <span className="font-montserrant font-medium text-sm leading-30 text-lightGreen-300 hidden md:inline-block">
                  Filter
                </span>
              </button>
            </div>
          </div>
          {/* View Data Section */}
          <div className="-mx-6 mt-6 lg:-mx-0">
            <InfiniteHits hitComponent={CompanyData} showPrevious={false} />
          </div>
        </div>

        {/* Filter Model For Mobile View */}
        <div
          className={`lg:hidden filterModelAnimation bg-white w-full overflow-y-auto h-full py-4 fixed top-0 left-0 ${
            filterModelMobile ? "block" : "hidden"
          }`}
        >
          <div className="flex justify-end items-center max-w-md mx-auto mb-5 pr-5">
            <button
              onClick={() => setFilterModelMobile(false)}
              className="border border-lightGray-100 bg-white rounded-full p-1 shadow-md"
            >
              <Close2 />
            </button>
          </div>
          <div className="max-w-xs mx-auto">
            <Filter
              clearFilter={clearFilter}
              setClearFilter={setClearFilter}
            />
          </div>
        </div>
      </InstantSearch>
    </div>
  </div>
</div>

There are a couple of things to highlight here:

  1. Featured Brands
  2. Instant Search
  3. Configure
  4. Infinite Hits

Featured Brands

Featured Brands is a component I’ve made that serves as the landing page banner for this site. I recommend you create one and include three main things:

  1. Aesthetic background that displays the qualities of your brand
  2. Engaging headline
  3. Email Submit

The email submission is quite important, and it requires you to go through the same process we used to set up the job data in the database. You’ll need to create the API for it, define the structure of the data in the schema.prisma file, and add an email.js in the prisma folder outlining the functions used to query the data.

the email.js file in the prisma folder
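At its simplest, something like this (assuming an Email model like the one shown just below):

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export const createEmail = (data) => prisma.email.create({ data });

export const getAllEmails = () => prisma.email.findMany();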

Fortunately for emails, there really isn’t much data we need except the email itself.

data structure in the schema.prisma file
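Which can be as small as this sketch:

model Email {
  id    String @id @default(auto()) @map("_id") @db.ObjectId
  email String @unique
}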

Essentially, on subscribe, the value in the input should be sent in a POST request to your database and stored there. This is great for building an email list that you can contact and send job alerts to.

Instant Search

Instant Search is why we use Algolia to begin with. It allows you to create a search bar, display your data, and categorize the data in the backend. If we want to change what search terms can be used in the search bar to find jobs, that’s all done in Algolia. Same with the ranking of the search results.

The two props you’ll need to pass in are the search client and your index name.

For starters, you should import these packages:

import algoliasearch from "algoliasearch/lite";
import {
InstantSearch,
Configure,
InfiniteHits,
} from "react-instantsearch-hooks-web";

Then, you need to use algoliasearch to create a search client that holds your keys:
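For example (the env variable names are up to you; just make sure you only expose the search-only key to the browser, never the admin key):

const searchClient = algoliasearch(
  process.env.NEXT_PUBLIC_ALGOLIA_APP_ID,
  process.env.NEXT_PUBLIC_ALGOLIA_SEARCH_KEY // search-only key
);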

For more details on how to personalize Instant Search, create a search bar, etc… refer to the documentation that Algolia provides on it here:

https://www.algolia.com/doc/guides/building-search-ui/what-is-instantsearch/react/

Configure

There are a couple of components you can use inside Instant Search to configure your results.

Configure gives you many options, but the one I use the most is a prop called hitsPerPage, which accepts the number of results you’d like to appear. I gave it the number 10 as a start.

More on this can be found here:

https://www.algolia.com/doc/api-reference/widgets/configure/react/

Infinite Hits

This is where all of the job data is going to be displayed. You’ll notice that I’ve passed the CompanyData component to the hitComponent prop, which is where you’ll receive the job data.

CompanyData will now receive all of the job data as a prop. In the CompanyData component, I can style and lay out all the data to my liking.

Phase #5 ~ Building The Job Search + Job Page UI

Go to your pages folder and create a folder called components.

In the components folder, create a companyData.js file. This file is where I do all the styling the user will see upon landing on the page.

const CompanyData = ({ hit }) => {
  return (
    !session ?
      <>
        <div
          className={`mx-4 rounded-lg mb-4 lg:mb-7 border border-[#2c4f43] lg:rounded-lg py-4 pl-4 hover:bg-[#dbd7d4] ${
            hit?.featured ? "bg-amber-200" : "bg-[#edebea]"
          }`}
        >
          <div className="z-0" onClick={goToJob}>
            <div className="flex flex-col gap-3">
              <div className="flex flex-row justify-between">
                <div className="flex flex-row items-center gap-3 md:gap-4">
                  {hit?.logo && (
                    <div className="self-start lg:self-center">
                      <img
                        src={hit?.logo.startsWith("data:image/") ? hit?.logo : `https://ecomportal-images.storage.googleapis.com/images/${hit?.logo}`}
                        alt=""
                        className="w-14 h-14 min-w-[56px] min-h-[56px] border border-lightGray-200 rounded-lg"
                      />
                    </div>
                  )}
                  <div>
                    <div className="flex flex-col font-montserrant md:flex-row items-center gap-1 lg:gap-3 xl:gap-6">
                      <div>
                        {hit?.job_position && (
                          <h2 className="text-black md:text-lg leading-5 md:leading-6 lg:!leading-30 font-montserrant tracking-common font-medium">
                            <a href={`/job/${hit?.jobUrl}`}>{hit.job_position}</a>
                          </h2>
                        )}
                        <div className="flex flex-wrap flex-row items-center justify-start gap-2 lg:gap-3">
                          {hit?.emp_count && (
                            <p className="flex flex-row items-center gap-2">
                              <UserIcon />
                              <span className="font-montserrant font-normal text-sm text-black tracking-common opacity-60 leading-6 lg:!leading-30">
                                {hit.emp_count <= 100 && '1 - 100 Employees'}
                                {hit.emp_count > 100 && hit.emp_count <= 500 && '100 - 500 Employees'}
                                {hit.emp_count > 500 && hit.emp_count <= 2000 && '500 - 2000 Employees'}
                                {hit.emp_count > 2000 && hit.emp_count <= 5000 && '2000 - 5000 Employees'}
                                {hit.emp_count > 5000 && '5000+ Employees'}
                              </span>
                            </p>
                          )}
                          {(hit?.salaryMin && hit?.salaryMax !== "null") || 0 ? (
                            <p className="flex flex-row items-center gap-2">
                              <SalaryIcon />
                              <span className="font-montserrant font-normal text-sm text-black tracking-common opacity-60 leading-6 lg:!leading-30">
                                {hit?.salaryMin && hit?.salaryMax && hit.salaryMin !== "0"
                                  ? hit.salaryMin === hit.salaryMax
                                    ? hit.salaryMax >= 10000
                                      ? `$${hit.salaryMax}/yr`
                                      : ""
                                    : hit.salaryMin >= 10000 && hit.salaryMax >= 10000
                                      ? `$${hit.salaryMin} - $${hit.salaryMax}`
                                      : hit.salaryMin >= 10000
                                        ? `$${hit.salaryMin} -`
                                        : hit.salaryMax >= 10000
                                          ? ` - $${hit.salaryMax}`
                                          : ""

..........

You can see here I’m receiving a hit object, which is filled with all of the job data for a single job stored in Algolia. With that object, I’m taking values like hit.job_position or hit.logo and styling them accordingly.

You can see that for each job I’m displaying the job name, the days since it was posted, the job type, the category, and the location, and the entire job is wrapped in a link tag with the application_url.

On click, your users will be sent to the application URL on Greenhouse, where they can apply for the job directly.

I’ve created an intermediary page, the job page, where I display information such as the job description and requirements we sourced, so the user can learn a little more about the job before committing to apply on the brand’s site.

To do this, I’m using something called Dynamic Routing, a great feature of NextJS.

Dynamic Routes allow you to define pages that have dynamic URLs based on the contents of a folder. To create a dynamic route, you would create a file in a folder with square brackets in its name (e.g. [id].js), where the contents inside the square brackets represent a dynamic parameter in the URL.

For example, if you have a file named [slug].js in a folder called pages/posts, you can access the page using a dynamic URL like http://example.com/posts/hello-world. The hello-world part of the URL will be passed to the [slug].js file as a query parameter, which you can then use to fetch data or render the appropriate content on the page.

Dynamic Routes can also be nested, allowing for more complex URL structures. For more information on how to use Dynamic Routes in Next.js, you can refer to the official documentation: https://nextjs.org/docs/routing/dynamic-routes.

Here is what my job page looks like. First, I’m doing a getServerSideProps call to the jobs API in order to fetch job data. In this case, I’m fetching the exact job that was clicked on.
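In sketch form, inside pages/job/[jobUrl].js (the NEXT_PUBLIC_BASE_URL env variable is my assumption; adapt the base URL handling to your setup):

export async function getServerSideProps(context) {
  const { jobUrl } = context.params;

  // Fetch the single job whose jobUrl slug was clicked
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_BASE_URL}/api/jobs?jobUrl=${jobUrl}`
  );
  const job = await res.json();

  return { props: { job } };
}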

Then I’m receiving the job as props and displaying that information much like we did in the CompanyData component:

Here’s the result:

This page displays the job description, job requirements, title, location, job type, category, and posted-at date, and the apply now link sends users to the final ATS page.

Conclusion

My goal for this tutorial was to give you the bare bones for building a simple niche job board. I had to figure this all out myself and see what works best, so I wanted to make sure you could skip all those steps and get right into building.

This can be multiplied into so many different niches and sub-niches. I love Japan Dev as an example because you’d think tech jobs in Japan is such a niche idea. Well, the founder makes over $60,000/mo running this business!

Hopefully this provided you some direction to build your job board in. If you have any questions, feel free to contact me:

Twitter: @smerlinger

And if you enjoyed this tutorial, shoot me a follow on my channel:

Youtube: www.youtube.com/c/shaunsmerling

All the best!
