Cloud Resource policy changes 2024-02-20 21:44:29
CPU
For the Central Processing Unit configuration in our cloud platform we use at least dual AMD EPYC 7352, dual Intel Xeon E5-2680 v4, or an equivalent CPU configuration for each host node.
The major difference between a vCPU (virtual CPU) and a dCPU (dedicated CPU) is the core usage guarantee.
vCPU
For vCPU the guarantee is a floating value, closely represented by the following formula.
vCPU maximum usage is 100% of 1x hostCPU core and minimum usage is 42.5% of 1x hostCPU core:
(1 x hostCPU) >= vCPU >= (0.425 x hostCPU)
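For example, under this guarantee a 4 vCPU plan (e.g. Station Commerce) is always entitled to at least 4 x 0.425 = 1.7 hostCPU cores worth of compute, and may burst up to 4 full hostCPU cores when the host has spare capacity.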
You can monitor CPU availability via the steal (st) indicator in top or with the mpstat command.
(Screenshot: top output where usage is within the acceptable range, as steal is less than 15%.)
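As a rough alternative to watching top interactively, the steal percentage can also be sampled directly from /proc/stat inside the instance. The following Python sketch is illustrative only, assumes a Linux guest, and reuses the 15% figure from the example above as an informal threshold.

```python
#!/usr/bin/env python3
"""Sample CPU steal time from /proc/stat over a short interval (Linux only)."""
import time

def cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq steal ..."
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    return [int(x) for x in fields]

def steal_percent(interval=5.0):
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    delta = [a - b for a, b in zip(after, before)]
    total = sum(delta)
    steal = delta[7] if len(delta) > 7 else 0  # 8th field is steal time
    return 100.0 * steal / total if total else 0.0

if __name__ == "__main__":
    st = steal_percent()
    # Below roughly 15% steal the instance is considered within the acceptable range.
    print(f"steal: {st:.1f}% ({'OK' if st < 15 else 'high'})")
```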
dCPU
dCPU maximum usage is 100% of 1x hostCPU core, expressed by the formula:
dCPU := 1 x hostCPU
hostCPU is 1 physical core of one CPU without SMT or HT applied, i.e. a dual AMD EPYC 7352 system has 96 threads but 48 physical cores, so 1 hostCPU represents 1/48th of the total physical cores in the system.
Hyper-Threading (HT) is Intel's proprietary implementation of simultaneous multithreading (SMT), enabling a single physical processor core to handle multiple threads for improved performance. Learn more on Wikipedia
Simultaneous Multithreading (SMT) is a technique that allows multiple independent threads to execute simultaneously on a single physical processor core to enhance parallelism and efficiency. Learn more on Wikipedia
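To see how a host's thread count relates to its physical core count (and therefore to hostCPU units), something like the following Linux-only Python sketch can be used; it simply counts unique (package, core) pairs under /sys and is an illustration, not part of the policy.

```python
#!/usr/bin/env python3
"""Count logical threads vs physical cores on a Linux host (illustrative only)."""
import glob
import os

def physical_core_count():
    cores = set()
    for topo in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology"):
        with open(os.path.join(topo, "physical_package_id")) as f:
            pkg = f.read().strip()
        with open(os.path.join(topo, "core_id")) as f:
            core = f.read().strip()
        cores.add((pkg, core))
    return len(cores)

if __name__ == "__main__":
    threads = os.cpu_count()        # logical CPUs (threads), e.g. 96 on dual EPYC 7352
    cores = physical_core_count()   # physical cores, e.g. 48 on dual EPYC 7352
    print(f"{threads} threads / {cores} physical cores "
          f"-> 1 hostCPU = 1/{cores} of the host")
```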
Plan | Total CPU |
---|---|
Base Commerce | 1 vCPU |
Outpost Commerce | 2 vCPU |
Station Commerce | 4 vCPU |
Port Commerce | 8 vCPU |
Base Compute | 1 vCPU |
Outpost Compute | 2 vCPU |
Station Compute | 4 vCPU |
Port Compute | 8 vCPU |
RAM
We use DDR4 ECC Registered RAM with at least a 2666 MHz rate for all types of RAM allocation.
vMEM
Memory is provided by virtual allocation with direct provision of used blocks of host physical RAM:
1MiB vMEM := Dedup(1MiB hostRAM - zeroBytes)
dMEM
Memory is provided by preallocation of host physical RAM:
1MiB dMEM := 1MiB hostRAM
hostRAM means physical system RAM provided by the hypervisor host system
zeroBytes means bytes not containing any data (allocated but empty memory pages)
Dedup means data which is found in other hostRAM pages will not be duplicated but referenced
Hypervisor is software, firmware, or hardware that creates and runs virtual machines (VMs) by abstracting physical hardware resources, enabling multiple operating systems to run on a single physical host. Learn more on Wikipedia
ZLE (Zero-Length Encoding) is a data compression algorithm that replaces sequences of zeroes with a special marker and a count, optimizing storage by reducing redundancy. Learn more on Wikipedia
Deduplication is a data compression technique that eliminates duplicate copies of repeating data to save storage space and improve efficiency, commonly used in backup and storage systems. Learn more on Wikipedia
ECC (Error-Correcting Code) is a type of computer memory that detects and corrects data corruption to ensure data integrity, commonly used in servers and critical systems. Learn more on Wikipedia
Registered Memory (RDIMM) is a type of RAM that includes a register to improve signal integrity and stability, commonly used in servers. Learn more on Wikipedia
RAM (Random Access Memory) is a type of volatile memory used by computers to store data and programs temporarily for quick access by the processor. Learn more on Wikipedia
vMEM is virtually allocated on the host system and presented to the instance by reference through the hypervisor.
dMEM is preallocated on the host system.
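As an illustration of the vMEM formula above, the sketch below models how much hostRAM a vMEM allocation actually consumes once zero pages are dropped and duplicate pages are referenced rather than stored again. The zero-page and duplicate fractions are hypothetical example inputs, not measured values.

```python
def host_ram_consumed(vmem_mib: float, zero_fraction: float, dup_fraction: float) -> float:
    """Model of: 1MiB vMEM := Dedup(1MiB hostRAM - zeroBytes).

    zero_fraction: share of the allocation that is allocated-but-empty pages (zeroBytes)
    dup_fraction:  share of the remaining pages already present in other hostRAM pages
                   (deduplicated, i.e. referenced instead of stored again)
    Both fractions are hypothetical example inputs.
    """
    non_zero = vmem_mib * (1.0 - zero_fraction)   # zero pages cost no hostRAM
    unique = non_zero * (1.0 - dup_fraction)      # duplicate pages are only referenced
    return unique

# Example: 2048 MiB vMEM with 30% zero pages and 20% of the rest deduplicated
# -> roughly 1147 MiB of hostRAM actually backed, versus 2048 MiB for dMEM.
print(host_ram_consumed(2048, 0.30, 0.20))
```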
Plan | Total RAM |
---|---|
Base Commerce | 2GB |
Outpost Commerce | 4GB |
Station Commerce | 8GB |
Port Commerce | 16GB |
Base Compute | 1GB |
Outpost Compute | 2GB |
Station Compute | 8GB |
Port Compute | 16GB |
SPACE
vNVME, vSSD, vSATA or vHDD space imposes different limits in size, IOPS and bandwidth on the vBLOCK device.
In the plan listing we only specify the interface without stating that it is a block device, because in our cloud configuration an interface implies using a block device.
This means vBLOCK applies to any Cloud virtual plan and dBLOCK applies to any Cloud dedicated plan:
vBLOCK
Device space is provided by virtual allocation with direct provision of used blocks of host LVM volumes:
1MiB vBlock := Dedup(1MiB hostBlock - zeroBytes)
vBLOCK devices = any interfaces prepended with v
dBLOCK
Device space is provided by direct allocation of host LVM volumes:
1MiB dBlock := 1MiB hostBlock
dBLOCK devices = any interfaces prepended with d
Limits and QoS are imposed on all types of IO operations: Sequential Write, Sequential Read, Random Write, Random Read, Mixed.
Interface limits in detail:
vNVME := 400.000 >= IOPS >= 7.000
vSSD / vSATA := 20.000 >= IOPS >= 400
dNVME := 1.800.000 >= IOPS >= 100.000
dSSD / dSATA := 170.000 >= IOPS >= 10.000
Even with direct allocation, dedicated devices face inherent limitations due to the underlying mechanics of NAND flash memory.
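The interface limits above can be summarised in a small lookup, for example to sanity-check a benchmark result against the range allowed for a given interface. The figures below simply restate the ranges listed above (thousands separators dropped); this is a convenience sketch, not an official tool.

```python
# IOPS ranges per interface, restated from the policy text above as (min, max).
IOPS_LIMITS = {
    "vNVME": (7_000, 400_000),
    "vSSD/vSATA": (400, 20_000),
    "dNVME": (100_000, 1_800_000),
    "dSSD/dSATA": (10_000, 170_000),
}

def within_policy(interface: str, measured_iops: int) -> bool:
    """Return True if a measured IOPS figure falls inside the allowed range."""
    lo, hi = IOPS_LIMITS[interface]
    return lo <= measured_iops <= hi

# Example: a vNVME volume benchmarked at 55k IOPS is within the policy range,
# while 25k IOPS on vSSD/vSATA would exceed the 20.000 cap.
print(within_policy("vNVME", 55_000))       # True
print(within_policy("vSSD/vSATA", 25_000))  # False
```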
For example:
- Continuous Write Speeds: Many SSDs offer sustained write speeds, such as 700 MB/s, when writing large amounts of data without interruption.
- Momentary Cached Speeds: Some SSDs use a cache (often DRAM or SLC caching) to temporarily boost performance, achieving speeds of 15.000 MB/s or more for short bursts, but slowing down significantly once the cache is full.
Learn more about NAND flash and Solid State Disks
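To make the cache effect concrete, the sketch below estimates the average write speed of a large transfer when an initial burst is absorbed by the drive's cache and the remainder proceeds at the sustained rate. The cache size and speeds are hypothetical examples in the spirit of the figures above, not specifications of any particular device.

```python
def average_write_speed(total_gb: float, cache_gb: float,
                        cached_mbps: float, sustained_mbps: float) -> float:
    """Average MB/s for a transfer that first fills the cache, then runs sustained."""
    total_mb = total_gb * 1000
    cached_mb = min(cache_gb, total_gb) * 1000
    sustained_mb = total_mb - cached_mb
    seconds = cached_mb / cached_mbps + sustained_mb / sustained_mbps
    return total_mb / seconds

# Example: 100 GB written to a drive with a 20 GB cache bursting at 15.000 MB/s,
# then falling back to 700 MB/s sustained -> roughly 865 MB/s on average.
print(round(average_write_speed(100, 20, 15_000, 700)))
```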
NVMe devices used in production include, but are not limited to: Samsung PM17/9xx series, Micron 9xxxMAX/PRO series.
SATA SSD devices used in production include, but are not limited to: Micron 5xxx series, Samsung PM8xx series.
IOPS = input output operations per second
IOPS (Input/Output Operations Per Second) is a performance measurement used to quantify the number of read and write operations.
IOBW = input output bandwidth in MB/s
Disk Bandwidth is the maximum rate at which data can be read from or written to a storage device, typically measured in megabytes per second (MB/s) or gigabytes per second (GB/s). Learn more on Wikipedia
SATA (Serial ATA) is a computer bus interface that connects storage devices like hard drives and SSDs to the motherboard, offering moderate speeds and affordability. Learn more on Wikipedia
NVMe (Non-Volatile Memory Express) is a high-speed storage protocol designed specifically for SSDs, offering significantly faster performance and lower latency compared to older interfaces like SATA. Learn more on Wikipedia
ZLE (Zero-Length Encoding) is a data compression algorithm that replaces sequences of zeroes with a special marker and a count, optimizing storage by reducing redundancy. Learn more on Wikipedia
Block Device is a type of storage device that manages data in fixed-size blocks, allowing random access to data, and is commonly used for filesystems, hard drives, and SSDs. Learn more on Wikipedia
Here is a table of the currently imposed limits for each plan:
Plan | Total IOPS | Read MB/s | Write MB/s |
---|---|---|---|
Base Commerce | 10.000 | 1.250 | 1.250 |
Outpost Commerce | 20.000 | 1.250 | 1.250 |
Station Commerce | 40.000 | 2.500 | 2.500 |
Port Commerce | 160.000 | 4.000 | 4.000 |
Base Compute | 15.000 | 800 | 800 |
Outpost Compute | 15.000 | 800 | 800 |
Station Compute | 15.000 | 800 | 800 |
Port Compute | 15.000 | 800 | 800 |
Base Transit | 100 | 100 | 100 |
Outpost Transit | 100 | 100 | 100 |
Station Transit | 100 | 100 | 100 |
Port Transit | 100 | 100 | 100 |
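For a sense of scale: at a 4 KiB block size, the 10.000 IOPS of Base Commerce corresponds to roughly 40 MB/s, so in practice the 1.250 MB/s bandwidth caps matter mostly for larger block sizes and sequential workloads.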
NETWORK
We use a 2x 10 gbps LAG or a 2x 40 gbps LAG uplink to each host node. This section describes only the virtualized devices and their share of uplink resources.
vNET
Cloud instances share the bandwidth of the uplink with a guaranteed minimum of 400 mbps per instance. The maximum available to an instance is subject to a hard limit of 10 gbps and to instance CPU performance.
vNET, pNET = virtual machine interface represented by a tap device in the virtualization host system
vBW = available bandwidth
NGB = amount of available traffic in GB
NBW = chosen network speed plan
Please note:
We ensure that the site uplink is capable of providing bandwidth at the full rate by imposing limits and traffic shaping and by maintaining at least 20% of unused capacity as a reserve.
Upon reaching the limit, the customer will be billed per plan specifications.
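As an illustration of how these quantities relate, the sketch below estimates how long an instance could sustain a given bandwidth before exhausting its traffic allowance (NGB). The plan figure in the example is taken from the table below; the sustained rate is a hypothetical input, not a measurement.

```python
def days_until_limit(ngb_limit_gb: float, sustained_mbps: float) -> float:
    """Days of continuous transfer at `sustained_mbps` before `ngb_limit_gb` is used up.

    ngb_limit_gb:   NGB, the plan's available traffic in GB
    sustained_mbps: the bandwidth actually used (hypothetical input)
    """
    gb_per_day = sustained_mbps / 8 / 1000 * 86_400   # Mbit/s -> GB per day
    return ngb_limit_gb / gb_per_day

# Example: the Port Commerce upload limit of 120.000 GB lasts about 27.8 days
# at a constant 400 mbps (the guaranteed minimum rate).
print(round(days_until_limit(120_000, 400), 1))
```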
Plan | Uplink speed | Download limit | Upload limit |
---|---|---|---|
Base Commerce | 10 gbps | unmetered | 12.000GB |
Outpost Commerce | 10 gbps | unmetered | 24.000GB |
Station Commerce | 10 gbps | unmetered | 50.000GB |
Port Commerce | 10 gbps | unmetered | 120.000GB |
Base Compute | 10 gbps | unmetered | 4.000GB |
Outpost Compute | 10 gbps | unmetered | 12.000GB |
Station Compute | 10 gbps | unmetered | 20.000GB |
Port Compute | 10 gbps | unmetered | 24.000GB |
Notes:
Because internet transit and peering involve multiple networks, the external bandwidth entering and leaving our network depends on the transit and peering arrangements of other networks as well. Therefore, before filing a Resource Policy Investigation Case (RPIC), please ensure that the issue occurs within our network (ServerAstra AS56322). Tools like traceroute, MTR, and ping can help you identify the problem and provide us with a compelling case to escalate to our network engineers.
Uplink refers to the connection or transmission of data from a local device or network to a higher-level network, such as from a client to a server or from a local network to the internet. Learn more on Wikipedia
Duplex in telecommunications refers to the ability of a communication system to send and receive data simultaneously (full-duplex) or alternately (half-duplex) between two devices. Learn more on Wikipedia
TAP Device is a virtual network kernel device that operates at the data link layer, emulating an Ethernet device and facilitating packet-based communication for network bridging or tunneling. Learn more on Wikipedia
Link Aggregation (LAG) is a networking technique that combines multiple physical network connections into a single logical connection to increase bandwidth, improve redundancy, and balance traffic loads. Learn more on Wikipedia