Stellar Cloud and Stellar Pro Cloud Resource Policy

CPUs

The Central Processing Unit configuration used in our cloud systems is dual AMD EPYC 7401 per host node.
The difference between a v(irtual)CPU and a d(edicated)CPU is the core usage guarantee.

vCPU means a maximum of 100% and a minimum of 50% of 1x AMD EPYC 7401 core, expressed by the formula:
vCPU := (0.5 x hostCPU) <= vCPU <= (1 x hostCPU)

dCPU means 100% of a dedicated 1x AMD EPYC 7401 core, expressed by the formula:
dCPU := 1 x hostCPU

hostCPU is 1 (one) core of a single AMD EPYC 7401 processor
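
To illustrate the guarantee above, here is a minimal Python sketch (purely illustrative, not part of our tooling) that returns the guaranteed and maximum host-core share for each CPU type:

# Sketch: CPU share guarantees per core type (illustrative only).
# vCPU: between 50% and 100% of one host core; dCPU: always 100%.
def cpu_share_range(cpu_type: str, cores: int = 1) -> tuple[float, float]:
    """Return (guaranteed, maximum) host-core share for the given core count."""
    if cpu_type == "vCPU":
        return 0.5 * cores, 1.0 * cores
    if cpu_type == "dCPU":
        return 1.0 * cores, 1.0 * cores
    raise ValueError(f"unknown CPU type: {cpu_type}")

print(cpu_share_range("vCPU", 2))  # (1.0, 2.0): at least 1 full core, at most 2
print(cpu_share_range("dCPU", 2))  # (2.0, 2.0): 2 dedicated cores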

RAM

We use DDR4 ECC Registered RAM at a rate of at least 2666 MHz for all types of RAM allocation.
There is no difference in RAM allocation between plans due to the nature of virtualization.

vRAM or pRAM memory is provided by virtual allocation with direct provision of the used blocks of host physical RAM:
1MiB vRAM | 1MiB pRAM := 1MiB hostRAM - zeroBytes

hostRAM is the physical system RAM presented by the hypervisor host system
zeroBytes are bytes not containing any data, i.e. allocated but empty memory pages

Both vRAM and pRAM are virtually allocated on the host system and presented to the instance by reference through the hypervisor.
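
To illustrate the zeroBytes term above, a minimal Python sketch (the helper and figures are assumptions for illustration, not our allocator) of how much host RAM a thin allocation actually consumes:

# Sketch: host RAM backed by a vRAM/pRAM allocation (illustrative only).
# Allocated but never-touched (zeroed) pages consume no host memory.
def host_ram_used(allocated_mib: int, zero_mib: int) -> int:
    """hostRAM actually backed = allocated memory minus zeroBytes."""
    if zero_mib > allocated_mib:
        raise ValueError("zeroed pages cannot exceed the allocation")
    return allocated_mib - zero_mib

# A 4096 MiB instance that has only touched 1024 MiB of its memory:
print(host_ram_used(4096, 3072))  # 1024 MiB of host RAM actually backed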

SPACE

vNVME, vSSD or vHDD space imposes different limits on size, IOPS and bandwidth for the vBLOCK device.

In the plan listing we only specify the interface without stating that it is a block device, because the interface implies the use of a block device.
This means vBLOCK applies to any Stellar Cloud plan and pBLOCK applies to any Stellar Cloud Pro plan:

vBLOCK Device space is provided by virtual allocation with direct provision of used blocks of host LVM volumes:
1MiB vBlock := 1MiB hostBlock - zeroBytes

pBLOCK Device space is provided by direct allocation of host LVM volumes:
1MiB pBlock := 1MiB hostBlock

vBlock devices = any interface prefixed with v
pBlock devices = any interface prefixed with p
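
The same idea in a minimal Python sketch (illustrative only; the helper is an assumption, not our provisioning code), contrasting thin vBLOCK accounting with fully provisioned pBLOCK devices:

# Sketch: host space consumed by vBLOCK (thin) vs pBLOCK (full) devices.
def host_block_used(device: str, size_mib: int, zero_mib: int = 0) -> int:
    if device.startswith("v"):   # vNVME, vSSD, vHDD: only written blocks count
        return size_mib - zero_mib
    if device.startswith("p"):   # pNVME, pSSD, pHDD: fully allocated up front
        return size_mib
    raise ValueError(f"unknown device prefix: {device}")

print(host_block_used("vSSD", 20480, zero_mib=15360))  # 5120 MiB on the host
print(host_block_used("pSSD", 20480))                  # 20480 MiB on the host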

Limits and QoS are imposed on all types of IO operations: Sequential Write, Sequential Read, Random Write, Random Read and Mixed.
Interface limits in detail:

vNVME := 1000 <= IOPS <= 1200000
vNVME := 75MB/s <= IOBW <= 15000MB/s

vSSD := 100 <= IOPS <= 120000
vSSD := 7MB/s <= IOBW <= 2500MB/s

vHDD := 10 <= IOPS <= 12000
vHDD := 0.7MB/s <= IOBW <= 750MB/s

pNVME := 2500 <= IOPS <= 1200000
pNVME := 155MB/s <= IOBW <= 15000MB/s

pSSD := 250 <= IOPS <= 120000
pSSD := 25MB/s <= IOBW <= 2500MB/s

pHDD := 25 <= IOPS <= 12000
pHDD := 7MB/s <= IOBW <= 750MB/s

NVMe devices used include, but are not limited to, the Intel P4510.
SSD devices used include, but are not limited to, the Micron 1100.
HDD devices used include, but are not limited to, the Hitachi Ultrastar.

IOPS = input/output operations per second
IOBW = input/output bandwidth in MB/s
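
For convenience, the limits above can be read as (minimum, maximum) ranges. A small Python sketch (illustrative only) that encodes them and checks whether an observed value falls inside its interface's range:

# Sketch: the interface limits above as a lookup table (illustrative only).
# Each entry is a (min, max) pair; IOBW values are in MB/s.
IO_LIMITS = {
    "vNVME": {"iops": (1000, 1200000), "iobw": (75, 15000)},
    "vSSD":  {"iops": (100, 120000),   "iobw": (7, 2500)},
    "vHDD":  {"iops": (10, 12000),     "iobw": (0.7, 750)},
    "pNVME": {"iops": (2500, 1200000), "iobw": (155, 15000)},
    "pSSD":  {"iops": (250, 120000),   "iobw": (25, 2500)},
    "pHDD":  {"iops": (25, 12000),     "iobw": (7, 750)},
}

def within_limits(interface: str, metric: str, value: float) -> bool:
    low, high = IO_LIMITS[interface][metric]
    return low <= value <= high

print(within_limits("vSSD", "iops", 50000))  # True: inside the vSSD IOPS range
print(within_limits("vHDD", "iobw", 900))    # False: above the vHDD ceiling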

NETWORK

We use a dual 10Gbit/s bonded uplink to each host node. This section describes only the virtualized devices and their share of the uplink resources.

vNET := IF NTB > 0 THEN 10Mbit/s duplex <= vBW <= 1Gbit/s duplex ELSE vBW = 10Mbit/s duplex
pNET := NBW * 1Mbit/s duplex <= vBW <= 10Gbit/s duplex

vNET, pNET = virtual machine interface, represented by a tap device in the virtualization host system
vBW = available bandwidth
NTB = amount of available traffic in TB
NBW = chosen network speed plan
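
As an illustration of the two rules above, a minimal Python sketch (purely illustrative; values are in Mbit/s duplex) that returns the guaranteed and maximum bandwidth for an instance:

# Sketch: bandwidth range available to an instance (illustrative only).
def vnet_bandwidth(ntb: float) -> tuple[float, float]:
    """vNET: 10 Mbit/s up to 1 Gbit/s while traffic (NTB) remains, else 10 Mbit/s."""
    return (10.0, 1000.0) if ntb > 0 else (10.0, 10.0)

def pnet_bandwidth(nbw_mbit: float) -> tuple[float, float]:
    """pNET: guaranteed NBW Mbit/s, bursting up to the 10 Gbit/s uplink."""
    return (nbw_mbit, 10000.0)

print(vnet_bandwidth(ntb=2))         # (10.0, 1000.0): traffic left, burst to 1 Gbit/s
print(vnet_bandwidth(ntb=0))         # (10.0, 10.0): traffic exhausted, capped
print(pnet_bandwidth(nbw_mbit=500))  # (500.0, 10000.0)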

Please note that:
Due to the nature of internet transit and peering, the bandwidth that leaves and reaches our network externally also depends on the transit and peering of other networks, and is provided with our highest quality effort. Before filing a Resource Policy Investigation Case (RPIC), please ensure that the problem occurs within our network or our upstream (Magyar Telekom) network. Tools like traceroute, mtr and ping will help you identify the issue and present us with a compelling case to escalate to our network engineers.
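
For example, a short Python sketch (illustrative only; the target hostname is a placeholder and mtr must be installed) that collects an mtr report you can attach to the case:

# Sketch: collect an mtr report for an RPIC (illustrative only).
import subprocess

target = "example.com"  # replace with the affected destination
result = subprocess.run(
    ["mtr", "--report", "--report-cycles", "50", target],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # per-hop loss and latency to include as evidence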
