Modeling Starlink capacity

Mike Puchol
Oct 2, 2022 · 19 min read

It has been 18 months since I launched starlink.sx, a personal project aimed primarily at increasing my understanding of how SpaceX’s Starlink constellation worked, and how it could be applied to an environment such as Kenya, where my startup, Poa Internet, offers affordable broadband to underprivileged communities, using Fixed Wireless Access (FWA). Given the general lack of good fiber infrastructure in most of Africa, using Starlink as a backhaul method for FWA distribution seemed credible.

starlink.sx v0.1 pre-pre-alpha

While I expected this to be a short 2–3 month project, based on my Loon tracker, it soon became apparent that the bodies of knowledge involved in placing a Low Earth Orbit (LEO) Non-GeoStationary Orbit (NGSO) constellation into operation are many, and not just in aerospace engineering, orbital mechanics, or wireless communications, but also legal, financial, and regulatory, to name a few. I was in for a wild ride.

Today, I have released v2.0 of the site, with the first stab at simulating potential service capacity in a country of choice.

Big, fat, and bold caveats to this article

The following are the main caveats placed around this initial release. Some may be addressed in future updates; others will remain, as they depend on non-public information that SpaceX may never disclose.

  • I explicitly do not authorize any organization or entity to cite this article, or the results obtained by the simulation, for inclusion in filings with any regulatory body under any jurisdiction. If you need an example of what not to do, here is Jonathan McDowell’s letter to the FCC to correct the misinterpretations of his work in filings made by Viasat against SpaceX. I do welcome questions, suggestions, feedback, and input from everyone, as always.
  • The simulation is squarely aimed at learning how different techniques (which I will describe below) can impact the potential for service delivery. It is not meant to offer a qualified opinion on the actual service potential offered by Starlink today, or its potential for future change, either for better or worse.
  • Some results will be very optimistic, others very pessimistic, compared to reality. On the optimistic side, I model only one country at a time, whereas in reality, satellite resources have to be shared with neighboring countries. On the pessimistic side, Starlink could be delivering service from in-orbit spare satellites, which the simulation does not use, rather than keeping them as “dry powder” to replace failures.
  • The simulation is focused on downlink capacity only.
  • The calculations required to run a single snapshot can become massive, taking up considerable CPU resources from your computer, and locking the browser for a long time. A large country with 30k cells, served by 150 satellites, each with 48 spot beams, could require up to 216 million calculations.

A brief introduction to Starlink’s capabilities

In order to simulate the capacity and service potential of the Starlink constellation, we need to understand what the capabilities of the satellites and ground segment are. Below is a simplified diagram of the system’s components.

Starting on the customer side, Starlink provides user terminals (UT), commonly known as Dishy, which are Electronically Steered Antennas (ESA) with mechanical azimuth/elevation adjustment motors. The UT uses phased array technology to form a transmit (uplink) and receive (downlink) beam in Ku band (12–14 GHz) focused on the satellite providing service to the cell where it is located. In order to split territories, Starlink uses Uber’s H3 hexagonal cell system, which you can view in action here. The satellites, in turn, also use ESAs to project a spot beam onto a cell, and pass the traffic onwards towards a gateway (GW) using two parabolic Ka band (27–40 GHz) gimbaled antennas. Each gateway site typically features nine antennas, in a 3x3, 4+5, or 1x9 arrangement.

A typical 3x3 configuration in Turkey. Credit Google Maps.

Eight antennas are active, and one is left as standby/spare. A gateway can thus fully serve four satellites. The gateways in turn connect to Points of Presence (POPs) over high-capacity fiber, where the traffic is handed off to Internet backbones.

Satellite capacity

From publicly available information, such as FCC filings, AMAs, and other articles, we know that each satellite features four ESAs in Ku band, one for uplink, three for downlink, with each antenna capable of projecting eight beams in two polarizations (RHCP/LHCP), for a total 48 downlink beams and 16 uplink beams. This results in a 75/25 downlink/uplink split ratio. The maximum bandwidth available to Starlink in Ku band is 8x 250 MHz channels in downlink (total 2 GHz), and 8x 62.5 MHz channels in uplink (total 500 MHz).
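A quick sanity check of these figures (the beam counts and spectrum come from the public filings quoted above; the rest is plain arithmetic):

```python
# Per-satellite Ku-band beam and spectrum arithmetic from the figures above.

DL_ANTENNAS, UL_ANTENNAS = 3, 1   # downlink / uplink ESAs per satellite
BEAMS_PER_ANTENNA = 8             # beams each antenna can form
POLARIZATIONS = 2                 # RHCP + LHCP

dl_beams = DL_ANTENNAS * BEAMS_PER_ANTENNA * POLARIZATIONS   # 48 downlink beams
ul_beams = UL_ANTENNAS * BEAMS_PER_ANTENNA * POLARIZATIONS   # 16 uplink beams
dl_share = dl_beams / (dl_beams + ul_beams)                  # 0.75 -> 75/25 split

dl_spectrum_mhz = 8 * 250    # 2 GHz of downlink spectrum
ul_spectrum_mhz = 8 * 62.5   # 500 MHz of uplink spectrum

print(dl_beams, ul_beams, f"{dl_share:.0%}", dl_spectrum_mhz, ul_spectrum_mhz)
```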

Credit: SpaceX https://starlink.com/technology

Gateway capacity

The two Ka band parabolic antennas on each satellite can, combined, provide a throughput of around 20 Gbps when connected to a gateway. Each gateway antenna has available a maximum of 4x 500 MHz channels (total 2 GHz) in uplink, and 5x 250 MHz channels (total 1.25 GHz) in downlink.

A typical gateway antenna. Credit /u/diadumenianus via Reddit.

Optical Inter-satellite links (ISL)

A recent addition to Starlink’s satellites, starting with v1.5 in June 2021, has been optical inter-satellite links. Each satellite features three optical heads, using infrared lasers to communicate with other satellites in the same plane, and across planes, as shown below.

In the above scenario, one satellite is connected to a gateway. The rest of the satellites do not have direct gateway connectivity, but use in-plane and cross-plane ISL to relay traffic, and are thus able to serve UTs that would otherwise be out of coverage. Starlink has posted an image of what these ISL heads look like:

Credit: SpaceX https://starlink.com/technology

Caveat: ISL is not a panacea; you still need to offload the traffic somewhere. Thus, if a single plane with 20 satellites shares one gateway, every satellite would have a balanced capacity of 5% of what a standalone satellite would have.

Prediction: Starlink, and other large NGSO constellations, will eventually have to migrate to optical gateways in order to provide the capacity required. There is only so much spectrum available in Ka and V/E band, and the terahertz gap looms.

Constraints and limitations

There are some limitations, imposed by the regulators, which reduce the service levels the Starlink system can provide.

Frequency re-use and co-frequency spot beams

Any satellite emitting RF energy towards Earth must comply with power limits as received on the ground, measured as Equivalent Power Flux Density (EPFD), set primarily by the ITU. The power from spot beams whose footprints overlap, and share the same frequency, is additive, thus, to comply with the limits, the transmit power of each beam must be reduced. Starlink is forced to use Nco=1 (only one co-frequency beam on a single cell), which effectively means only eight spot beams could be projected simultaneously onto a single cell, with no overlap of same-frequency beams. This has a considerable impact on frequency re-use and system capacity.

Caveat: the simulation doesn’t take Nco=1 into account at this time, as the complexity of calculations increases dramatically. It can be assumed that Starlink’s scheduler will ensure the restrictions are enforced.

Gateway spectrum availability

In certain areas, the Ka band spectrum available to gateways is reduced to half, due to priority use by other licensees. This effectively halves the throughput each of these gateways can provide to the connected satellites. The simulation does take this issue into account, and applies the corrected available throughput to connected satellites — where data is publicly available from regulators. Otherwise, the gateways are assumed to have full capacity.

GSO protection

GeoSynchronous Orbit (GSO) operators with satellites in Geostationary Equatorial Orbits (GEO) have no capacity to move their satellites to avoid inline events, which could cause interference to customers of e.g. satellite TV, as they share the same Ku band spectrum slice with Starlink. Therefore, they are afforded priority protection against NGSO operators, who must cease emissions when inline with the GSO belt. The amount of protection afforded is related to the beamwidth, transmit power levels, and other factors, and in Starlink’s case, it began with ±16º, but was recently reduced to ±10º, which caused an instant increase in the regions that could be served. This is the reason why the UT tilts away from the GSO belt using the motors, in order to focus the antenna towards the area where it will be able to transmit towards the satellites.

Flat UT on the left, with a large unusable portion of its FOV. Tilt increases the usable elevation range.

A simplistic capacity simulation attempt

If we were to take a simple view of the capabilities of Starlink’s constellation, we would know that each satellite has a footprint of ~1,800 km in diameter, 48 spot beams in downlink, and the corresponding gateway backhaul. There are approximately 10,000 H3 size 5 cells (the exact number varies with altitude) under a satellite’s Field Of Regard (FOR). In the image below, those affected by GSO protection are shown blanked, with some 8,000 cells remaining.

We can immediately question how a single satellite, with 48 downlink beams available, could possibly serve up to 8,000 cells — it would need to split ~20 Gbps of capacity, resulting in only 2.5 Mbps per cell, assuming it could move the beams around so quickly. The immediate answer is satellite overlap, which can result in a single cell being within the FOR of 12–14 satellites at any given time. Let’s see what a simulation of one beam per cell, dedicated 100% of the time to that cell, would look like:
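A minimal sketch of this back-of-envelope arithmetic, assuming the ~20 Gbps per-satellite downlink figure quoted earlier (the exact usable throughput is not public):

```python
# Naive split of one satellite's downlink across every cell in its Field Of
# Regard, plus the capacity of a single dedicated spot beam. Rough figures
# from the text; usable throughput per satellite is an assumption.

SAT_CAPACITY_MBPS = 20_000   # ~20 Gbps usable downlink per satellite (assumed)
CELLS_IN_FOR = 8_000         # cells remaining after GSO-protection blanking
SPOT_BEAMS = 48              # downlink spot beams per satellite

per_cell_naive = SAT_CAPACITY_MBPS / CELLS_IN_FOR   # ~2.5 Mbps if shared evenly
per_beam = SAT_CAPACITY_MBPS / SPOT_BEAMS           # ~417 Mbps for a dedicated beam

print(f"{per_cell_naive:.1f} Mbps per cell naively, {per_beam:.0f} Mbps per beam")
```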

The grey cells represent those that didn’t get any service at all. Those cells that got satellite resources tasked are shaded from green to red, according to how far above or below the average capacity they got. Yellow would mean the cell was receiving exactly the average capacity amongst all serviced cells. If we zoom in, we can see how these two satellites were serving 48 cells each, with a beam capacity of 417 Mbps:

In principle, any cell that is served should be yellow, as the satellites are fully dedicating a spot beam to each cell — what is going on with some of the cells being red? These are being served by a satellite that doesn’t have a direct gateway link, but is instead linked over ISL:

In these cases, the simulator assumes the satellite to have 10% of the normal capacity it would have under full gateway coverage. Thus, every cell is only receiving 42 Mbps, which is well below average, and is therefore shown in red.

The final results are somewhat depressing:

Of almost 37k cells, we could only service 6k, or 16% of the total.

Caveat: many cells in Alaska are not covered no matter what we do, due to lack of satellite density in the high inclination shells. We are not yet able to achieve 100% of cell coverage for the simulation of the United States.

We used 131 satellites and 5,992 of their spot beams in the process. From the theoretical total capacity of the used satellites, we used 99%, or 2.3 Tbps. The maximum capacity a cell could achieve is 700 Mbps (250 MHz channel), and the average was almost 380 Mbps. At a Committed Information Rate (CIR) of 5 Mbps per UT, we could serve just over 450k terminals, or 75 terminals per cell.
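The relationship between these headline numbers is simple arithmetic; a quick sketch, using rounded figures from this run (the simulator's exact values differ slightly):

```python
# Terminals served = total used capacity / CIR, then spread across served cells.

total_capacity_mbps = 2_300_000   # ~2.3 Tbps of satellite capacity used
cir_mbps = 5                      # Committed Information Rate per UT
served_cells = 6_000              # cells that received at least one beam

terminals = total_capacity_mbps / cir_mbps       # ~460k UTs
terminals_per_cell = terminals / served_cells    # ~75 UTs per cell

print(f"{terminals:,.0f} terminals, ~{terminals_per_cell:.0f} per served cell")
```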

Resource allocation mechanisms

In order to overcome the limitations of assigning a single spot beam per cell, and be able to cover more cells with the same number of satellites, we can introduce various methods. Some, such as Frequency Division Multiplexing (FDM), are not discussed in this context, as they do not allow an increase in the number of cells covered.

Time Division Multiplexing (TDM)

Also known as “beam hopping” in the traditional satellite industry, TDM allows a beam to be tasked for a certain period of time on a cell, then for another period of time on a different cell, and so on. The duration of each time slot, multiplied by the channel capacity, defines the capacity allocated per cell. Allocating resources in this way results in Time Division Multiple Access (TDMA), a method for multiple entities to share a common resource by slicing in the time domain. For example, if a 700 Mbps spot beam was allocated 10% of the time to each cell, it could serve 10 cells, at 70 Mbps each.
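A minimal sketch of this kind of allocation (the function and cell names are illustrative, not the simulator's actual code):

```python
# Beam-hopping (TDM) sketch: a beam's capacity is divided across cells by
# time-slot fraction. A 10% split serves up to 10 cells at 70 Mbps each.

def tdm_allocate(beam_capacity_mbps, tdm_fraction, cells):
    """Give tdm_fraction of the beam to each cell until the beam is fully used."""
    max_cells = int(1 / tdm_fraction)   # e.g. 0.10 -> 10 cells per beam
    return {cell: beam_capacity_mbps * tdm_fraction for cell in cells[:max_cells]}

allocation = tdm_allocate(700, 0.10, [f"cell_{i}" for i in range(12)])
print(allocation)   # 10 cells at 70.0 Mbps; the remaining 2 get nothing from this beam
```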

Beam spread

The contour of a Starlink spot beam at nadir is almost circular, and covers just over one H3 cell. As you steer the beam away from nadir, the contour becomes more elliptical. You can see this effect by shining a flashlight onto a wall at different angles:

Beam close to nadir.
Beam at large steering angle.

Plotted over hexagonal cells, these patterns would look something like this:

Beam A covers only one cell, whereas beam B has become highly elliptical due to increased steering away from nadir, and covers not only its primary cell but also four additional cells.

Technically, all the UTs inside the contour of the spot beam have enough link budget to communicate with the satellite, so a beam that has been “spread” over five cells is able to service UTs in any of them. The resource scheduler can, however, decide which cells inside the beam’s FOR it will allow service to, and deny access to the rest, thus limiting the damage to CIR by over-spreading of the beam. At maximum steering angle, a beam can cover up to 30 cells — our 700 Mbps beam could only allocate ~23 Mbps per cell on average.
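A flat-Earth sketch of the flashlight effect, assuming an illustrative ~1º beam half-width (not a published Starlink figure); near the edge of coverage, Earth curvature makes the incidence even more grazing, so the real spread is larger than this simple model suggests (up to ~30 cells):

```python
import math

# Flat-Earth approximation of spot beam footprint growth with steering angle.
# The 1 deg half-width is an illustrative assumption chosen so the nadir
# footprint covers just over one H3 size-5 cell, as described above.

def footprint_cells(off_nadir_deg, altitude_km=550, half_width_deg=1.0,
                    cell_area_km2=252):
    theta = math.radians(off_nadir_deg)
    alpha = math.radians(half_width_deg)
    slant = altitude_km / math.cos(theta)         # distance to footprint centre
    minor = 2 * slant * math.tan(alpha)           # across-track axis (km)
    major = minor / math.cos(theta)               # along-track axis stretches ~1/cos
    area = math.pi * (minor / 2) * (major / 2)    # elliptical footprint area (km^2)
    return area / cell_area_km2                   # rough number of cells covered

for angle in (0, 30, 45, 55):
    print(f"{angle:>2} deg off nadir -> ~{footprint_cells(angle):.1f} cells")
```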

Predictive cell demand

Before Earth Stations In Motion (ESIM) use is allowed by the regulator, UTs must be stationary in order to receive service. Although early tests showed in-motion use working, SpaceX has disabled it in recent firmware updates. With UTs assigned to a particular cell, and only a small subset able to roam using the RV service, it is relatively easy for the scheduler to determine what capacity must be allocated to which cell in order to honor the CIR. Specific events such as Burning Man could result in large volumes of RV users suddenly appearing in an unexpected location, but the scheduler would detect this and increase the resources assigned to the area. The simulator addresses this by using population density as a proxy for UT density. Using the awesome work by Kontur and their Global Population Density binned into H3 cells, I aggregated the dataset into size 5 H3 cells, and exposed the population of each cell to the simulation. We can select anywhere from 1 person per cell to unlimited density, in order to adjust the number of candidate cells. If we adjusted the simulation to cover only cells containing between 1 and 1,000 people, those most sparsely populated, we would only need to cover ~16k cells:

It is evident where most “empty space” is. While we have cut the number of candidate cells by 20k, the remaining set is clustered, and most cells (67%) still go without service. Increasing satellite density would obviously improve things — we must bear in mind that the complete Gen1 constellation comprises 4,400 satellites, whereas only ~2,000 are in operation right now.
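The population aggregation described above can be sketched with the H3 bindings; this assumes the h3 Python library (v4 API) and a Kontur-style input of (cell, population) pairs at a finer resolution; the actual file format and resolution used by the site may differ:

```python
from collections import defaultdict
import h3  # h3-py, v4 API assumed

# Roll fine-resolution (cell, population) pairs up into H3 size-5 cells, then
# filter candidate cells by population range, as the simulator settings allow.
# The sample coordinates and resolution-8 input are illustrative only.

fine_cells = [
    (h3.latlng_to_cell(-1.2921, 36.8219, 8), 1200),   # e.g. a cell over Nairobi
    (h3.latlng_to_cell(-1.3000, 36.8000, 8), 800),
]

population_res5 = defaultdict(int)
for cell, pop in fine_cells:
    population_res5[h3.cell_to_parent(cell, 5)] += pop

# Keep only cells with, say, 1 to 1,000 people as simulation candidates:
candidates = {c: p for c, p in population_res5.items() if 1 <= p <= 1000}
print(len(population_res5), "size-5 cells,", len(candidates), "candidates")
```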

Caveat: we have only tasked one beam per cell until now, whereas we know for sure there are at least a primary and a backup beam on every cell, and possibly more if demand warrants it.

Complex simulations

We can use the methods described above to see their impact on capacity and service coverage. Let’s run through them step by step.

Simulation with TDM only

If we ran the first simulation on all cells again, but assigned a 25% TDM split (thus, every cell gets 25% of a beam’s capacity), and two beams per cell, the result becomes quite different:

We are now able to cover 35% of cells, up from 16%, are still able to serve close to 460k UTs at 5 Mbps CIR, and use 98% of satellite capacity. Average cell capacity has however dropped from ~380 Mbps to ~180 Mbps. Zooming in, we can see how one specific cell is served by two dedicated beams from two satellites, each contributing 104 Mbps of capacity:

If we split each beam further, to a 10% TDM allocation per cell, we could cover 71% of cells with 87 Mbps each, and still serve the same number of UTs with the same total capacity utilized:

Simulation with beam spread only

If we revert to 100% duty cycle per cell, but enable beam spread to 5 cells, we increase serviced cells to 41%, while using 95% of satellite capacity, and serving, again, ~460k UTs. Logically, average capacity per cell has dropped to ~153 Mbps.

If we increase beam spread to 10 cells, we increase covered cells to 52%, at the expense of reduced satellite capacity usage, and average cell capacity of ~113 Mbps.

The cell below is served by one primary beam (green) providing 208 Mbps of capacity, and one “piece of 10” beam (pink) providing 60 Mbps.

As with TDM, if we turn the dial to 11, and allow unlimited beam spread, we can serve 62% of cells with 85 Mbps, but our overall utilized capacity decreases to 1.9 Tbps, or 89%, and we could only serve ~388k UTs — we are not sweating our assets as much.

It would appear that TDM is the winner, if we had to pick only one method, as we lose almost 16% of UTs with beam spread.

Simulation with TDM and beam spread combined

As in many cases, the answer probably lies somewhere in the middle. Let’s start with 50% TDM and 5 cell beam spread, and see what happens:

Truly in the middle, but not the way we expected. We cover 67% of cells, at 88 Mbps, and utilize 92% of our capacity, to serve ~430k UTs.

You may by now be asking yourself: “why are the covered cells clustered so much?”. The answer is an additional simulation setting: priority of cells close to nadir over cells at maximum slant. In all the simulations so far, we have used nadir priority, meaning the simulator would task resources to cells close to the satellite’s nadir, and move outward. By changing the priority to maximum slant, the process begins with cells at the edge of the satellite’s coverage. If you remember the earlier diagram of two spot beams, the larger the steering angle and slant range, the more elliptical the beam FOR, and the more cells it covers.
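A sketch of how these two priority modes could be expressed, ordering a satellite's candidate cells by distance from its sub-satellite point (function and structure names are illustrative, not the simulator's actual code):

```python
import math

# Order candidate cells by great-circle distance from the satellite's
# sub-point: ascending for nadir priority, descending for max-slant priority.

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tasking_order(cells, sub_lat, sub_lon, mode="nadir"):
    """cells: list of (cell_id, lat, lon). Returns them in tasking order."""
    ordered = sorted(cells, key=lambda c: haversine_km(sub_lat, sub_lon, c[1], c[2]))
    return ordered if mode == "nadir" else ordered[::-1]
```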

Caveat: even if we allow for unlimited beam spread, with beams close to nadir, the FOR will only be spread across 2–3 cells maximum. To see the real effect of beam spread, prioritize for maximum slant.

Let’s re-run the simulation but with maximum slant priority:

What a change! 74% of cells covered, 94% of capacity used, 75 Mbps per cell, and ~412k UTs served, or 15 UTs per cell. If we increase beam spread to unlimited, we are able to cover almost all cells, but at a penalty to cell throughput, which drops to 49 Mbps, with served UTs dropping to ~310k.

You may also notice that we are now using only 85% of satellite capacity, and 3,632 beams across 105 satellites, or ~35 beams per satellite. On a per-satellite basis, we are “wasting” 13 spot beams. What can we do about this? Let’s try to increase the number of beams per cell to six, and set TDM to 25%:

We are covering 87% of cells, with 71 Mbps on average, and servicing ~450k UTs, while using 87% of our satellite capacity.

Adjusting for population density

An H3 cell has an average surface area of 252 km². According to a study on Starlink and RDOF prepared by Cartesian for the Fiber Broadband Association and the Rural Broadband Association, 88.3% of RDOF locations won by Starlink are rural, defined as those with a population density of less than 500 people per square mile. One square mile equals 2.59 km², so a rural cell would be one with fewer than 48,650 people in it. If we configure the simulation to ignore cells with fewer than 10 people, and those with over 49,000 people, leaving all other settings unchanged, we are able to serve the same number of UTs, but at a higher average rate per cell, 84 Mbps.
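The rural threshold works out as follows (a straightforward unit conversion using the figures above):

```python
# Convert the RDOF rural-density definition into a per-cell population threshold.

CELL_AREA_KM2 = 252          # average area of an H3 size-5 cell
RURAL_DENSITY_PER_MI2 = 500  # people per square mile (Cartesian/RDOF definition)
KM2_PER_MI2 = 2.59

rural_density_per_km2 = RURAL_DENSITY_PER_MI2 / KM2_PER_MI2   # ~193 people/km^2
rural_cell_threshold = rural_density_per_km2 * CELL_AREA_KM2  # ~48,650 people

print(f"A size-5 cell is 'rural' below ~{rural_cell_threshold:,.0f} people")
```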

One interesting pattern starts to appear. Compare the above result with Starlink’s official coverage map:

Notice how a large portion of the South and Midwest shows as “Expanding in 2023”, and how it matches the result of our latest simulation run? The primary reason for this is not satellite density — you can run the simulation any time of the day and get the same result — but the fact that many gateways in that region operate at 50% capacity, due to spectrum priority usage by a prior licensee. The only options SpaceX has to fix this are to increase the number of gateways that can serve the region, move to other spectrum such as V/E bands, or go optical on the space-to-ground segment.

The capacity simulator explained

Simulation settings

Click on the ‘Capacity simulation’ icon in the toolbar, which brings up the simulation settings window:

While choosing a country, you will be warned if the number of cells contained in it can potentially result in long simulation runs. These can take over one minute to process, depending on the rest of the settings. The more you spread resources (e.g. a higher TDM split or a wider beam spread), the more calculations are required. The available settings are listed below, followed by a sketch of how they map onto a configuration object.

  • Draw spot beam links: shows a thin green line between each satellite and the primary cells it covers. When using TDM, it will show as many beams as TDM allocations, e.g. at 10%, each satellite projects 480 lines.
  • Simulation mode: Priority nadir begins assigning resources at cells closest to the satellite’s sub-point; max slant begins with those at the edge of coverage.
  • Beam spread: over how many cells in the beam’s FOR the beam can be spread. Unlimited will spread over however many cells fall inside the FOR.
  • Provisioned rate per UT: this sets the CIR which is in turn used to calculate how many UTs can be served.
  • Available 250 MHz channels: this setting has not yet been implemented, and has no effect on the simulator.
  • TDM allocation per cell: how to split each beam in the time domain. 50% would correspond to a 50/50 split.
  • Beams per cell: maximum beams allocated per cell. The simulator will attempt to reach this value, but once the satellites run out of capacity, cells will go with fewer beams, reflecting in the final result.
  • Cell population range: can be set between no people and unlimited, to constrain the number of cells used in the simulation.
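A minimal sketch of these settings as a configuration object (field names are my own; the simulator's internal representation is not public):

```python
from dataclasses import dataclass

# Illustrative container mirroring the settings listed above.

@dataclass
class SimulationSettings:
    country: str
    draw_spot_beam_links: bool = False
    priority_mode: str = "nadir"            # "nadir" or "max_slant"
    beam_spread_cells: int | None = 5       # None = unlimited spread
    cir_mbps: float = 5.0                   # provisioned rate per UT
    channels_250mhz: int = 8                # not yet implemented in the simulator
    tdm_allocation: float = 0.25            # fraction of a beam assigned per cell
    beams_per_cell: int = 2
    min_cell_population: int = 0
    max_cell_population: int | None = None  # None = unlimited

settings = SimulationSettings(country="United States", tdm_allocation=0.10,
                              beams_per_cell=6, priority_mode="max_slant")
```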

Exploring the simulation results

The first item to go over is the simulation statistics:

After the simulation is run, we will see how many cells were used, how many were covered by at least one beam, how many satellites had resources used, and how many spot beams in total were assigned.

The next line shows the total capacity used, obtained by adding up every allocated beam, both in absolute terms and as a percentage of the theoretical maximum for the satellites used (their individual capacities added up). Maximum cell capacity is the theoretical maximum a single cell could receive given the settings, and the average capacity is taken across all cells that were served. The number of terminals is the total capacity divided by the configured CIR.

If we zoom close enough, individual cell edges are shown, and the tooltip indicates the number of beams assigned, cell capacity, and population:

If we click on any cell, we can see which beams contributed to the overall cell capacity, and from which satellites. Green lines show primary beam allocation, and pink lines show allocation from another beam being spread onto the cell:

When cells are served by ISL satellites, the beam capacity is reduced to 10% of what it would normally be. If we zoom into the Aleutians, we can see how one cell was served by three ISL satellites, each contributing multiple beams to the cell:

The 15 people living in this cell are really lucky!

Exporting the simulation data

Once the simulation has been generated, you can export the data in CSV format by using the Export button. You will receive a file containing all cells, ordered by population count, and the simulated data per cell:
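Once exported, the file can be inspected with any spreadsheet or data tool; a quick sketch with pandas, noting that the file name and column names here are illustrative guesses; check the header of your own export and adjust accordingly:

```python
import pandas as pd

# Summarize an exported simulation run. File and column names are assumptions.

df = pd.read_csv("starlink_simulation_export.csv")

served = df[df["beams"] > 0]   # cells that received at least one beam
print(f"{len(served)} of {len(df)} cells served")
print(f"average cell capacity: {served['capacity_mbps'].mean():.0f} Mbps")
print(served.nlargest(10, "population")[["cell", "population", "capacity_mbps"]])
```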

Other interesting simulations

Brazil

This doesn’t match Starlink’s coverage map too well, which could be explained by certain gateways not being operational yet:

By turning off gateways (right click, then select ‘Disable’), we can simulate the effects on potential capacity:

Kenya

By using ISL satellites alone, 1,200 UTs could be served at 5 Mbps, using the full capacity of two satellites. Only 65% of cells could be covered.

Known issues

  • It seems like the cells for Australia were not correctly pre-computed, so simulation doesn’t work there right now.
  • The simulator may sometimes not generate results; in this case, reload the page.
  • Each simulation run is synchronous, thus, will lock the browser for extended periods, resulting in “Wait or Exit” warnings in some browsers. It is safe to click “wait” in these cases. Change the “Simulation pause” timer in General Settings to give breathing space to hit the pause button or change other settings.
  • This is the first release, other issues and bugs are expected — please report them!
