CLOUD RAN

Abstract

Mobile broadband is immensely important globally as a key socio-economic enabler, as evidenced by the continuing growth of data traffic on mobile networks. To meet this unabated growth in demand, cellular operators must increase their network capacity by using advanced wireless technologies and by adding more network elements such as cell sites and controllers. According to growth estimates, data traffic increases by 131 percent every year while air interface capacity grows only 55 percent yearly; at the same time, ARPU is constantly decreasing. Per UMTS Forum Report 44, total worldwide mobile traffic will reach more than 127 Exabytes in 2020, 33 times more than the 2010 figure. Significantly, at least 80 percent of the traffic volume is generated by users in ways that lead to large variations in the total mobile traffic in terms of time and space. Future mobile networks must be designed to cope with such variation and uneven distribution of traffic, while at the same time maintaining permanent and extensive geographical coverage in order to provide continuity of service to customers. In 2020, daily traffic per mobile broadband subscription in a representative Western European country will stand at 294 MB, and at 503 MB for dongles (67 times greater than in 2010). The cost of acquiring new spectrum, deploying new wireless carriers, evolving network technologies (e.g., from GSM to W-CDMA to LTE), adding more processing capacity, radios, and antennas, and managing the resulting heterogeneous network is becoming economically unsustainable and leads to a vicious cycle of demand.

An increase in the number of base stations results in more power consumption, higher interference, and reduced coverage and capacity due to that interference. It also requires more radio network controllers. Radio Access Network (RAN) architecture therefore requires solutions in the following areas:

> Additional base stations and radio antennas without increasing the number of cell sites
> Reconfigurable BSs to support multiple technologies
> Resource aggregation and dynamic allocation
> Cooperative radio technology for coordinated multi-point transmission and reception
> More capacity and coverage with reduced interference
> Distributed antenna technology for increased coverage
> Controller software enhanced to run in a virtualization environment, for lower costs and elastic capacity
> Reduced Capex and Opex, and overall TCO

This white paper provides an overview of the distributed RAN architecture called Cloud RAN, which addresses solutions for the areas mentioned above. It also provides a more detailed analysis of the Cloud radio network controller architecture.

Introduction

In a conventional cellular network, the antenna, RF equipment, digital processor, and baseband unit (BTS) sit in the cell site, as shown in the Conventional Cellular Network diagram below. This requires more power and real estate, plus additional directional antennas and large cell towers to support multiple frequency bands and new air interface technologies like LTE.

Enhancing a conventional network to support the data traffic demand on a current wireless network is economically unsustainable.
[Figure: Conventional Cellular Network: urban and rural zones with base stations (BTS), BSC, MSC, and Internet connectivity]

There is an immediate need to identify a solution that reduces the number of cell sites, effectively reuses resources, and employs reconfigurable basebands, multi-band radios, and distributed wideband antennas to support different air interface technologies. Cloud RAN is a distributed radio access network architecture consisting of the following network elements:

> Active antenna arrays
> Multi-band remote radio heads
> Centralized baseband units
> Metro cells
> Radio network controllers on the cloud
> Common management server
> SON server for seamless management and optimal network usage

[Figure 1: CRAN Access Technology Cloud: active antenna systems and remote radio heads connected over optical links to a centralized baseband bank (2G/2.5G, UMTS, HSPA, LTE eNB, LTE-A); controllers on RAN servers in the cloud (GSM/GPRS, UMTS, UMTS Femto GW, HeNB-GW, WiFi access gateway); SON server and common management server; IP connectivity to the core network, IMS/operator services, and the Internet; femto cells/WiFi over macro-site backhaul]

Active Antenna Array

In order to support increasing bandwidth demand, operators need to enhance their networks to support multiple technologies, multiple frequency bands, and new air interface technologies. This requires new antennas to be installed, including multiple directional antennas to support MIMO, beam forming, Rx diversity, etc. It also increases the number of antennas in an already dense network, which in turn increases interference between different cells and reduces cell capacity. The end result is increased site costs.

In the active antenna array solution, each antenna element supports a connection to a separate transceiver element. The antenna array can support multiple transceivers, which addresses the problem of installing multiple antennas to support multiple air interface technologies, MIMO, beam forming, Rx diversity, etc.

Each active antenna array has the transceiver (RF and digital component) hardware embedded with each antenna element inside the antenna array, rather than outside in a separate RF box called a remote radio head (RRH) or in a conventional TRDU/TMA. This reduces the loss due to the RF connection between the antenna and external RF. With the built-in transceivers, individual signals can be fed into different antenna elements to create focused vertical beams per user, carrier, technology, etc., which can control interference and increase cell capacity and coverage.

Multi-band Remote Radio Heads

In conventional networks, the BTS/NodeB contains the radio (RF and digital components) and baseband units, connected to an antenna using coaxial cables. The Open Base Station Architecture Initiative (OBSAI) and Common Public Radio Interface (CPRI) standards introduced standardized interfaces separating the server part and the radio part of the base station, the latter of which is supported by remote radio heads (RRH).

A separate RRH is required for each frequency band to support multiple frequency bands and multiple sectors in a given geographical area. The number of RRHs required increases proportionally, and in many macrocell deployments the RRH sits at the top of the cell tower with the antenna to reduce RF loss. In denser networks, increasing the number of RRHs may not be feasible everywhere, so RRHs may have to be deployed on high-rise buildings, etc. This increases the overall cost, RF loss, and maintenance costs.

Multi-band RRHs (MB-RRH), available from multiple vendors, address the issues mentioned above. An MB-RRH can support multiple frequency bands and multiple technologies like GSM, WCDMA, and LTE in combination. This reduces the number of RRHs required to support multiple frequency bands and different technologies, while reducing cell site costs, power consumption, and complexity.

Centralized Baseband Units

In typical macrocell deployments, the baseband unit (BBU) is located at the base of the cell tower along with the radio and other digital equipment. The cost of deploying new baseband units, radios, antennas, etc. to support additional carriers, spectral bandwidth, and different technologies, and of managing the resulting heterogeneous network, is becoming economically challenging and unsustainable.

The centralized baseband is built on the concept of Software Defined Radio (SDR), with distributed radio signal processing and baseband processing units that are software configurable and reduce the complexity of deploying BBUs at the cell site. Additional carriers, spectral bandwidth, new technologies, etc. can be seamlessly supported by stacking baseband units in the baseband pool and deploying remote MB-RRHs and active antenna arrays (AAA) at comparatively low cost and with easy maintenance. The baseband and radio signal processing is distributed using the CPRI interface between the BBU and the remote radio equipment.

The Common Public Radio Interface (CPRI) is an industry cooperation aimed at defining a publicly available specification for the interface between the Radio Equipment Control (REC) and the Radio Equipment (RE), which in our case are the BBU and the remote radio head, respectively. The scope of the CPRI specification is restricted to the link interface only (layers 1 and 2), which is basically a point-to-point interface. The Open Base Station Architecture Initiative (OBSAI) was introduced to standardize interfaces separating the base station server and the radio part of the base station. Figure 2 depicts a CRAN architecture utilizing the CPRI or OBSAI interface.

Key features of this architecture (Architecture A) are:

> Cells are distributed across processors and flexibly connected to the radio unit through high-bandwidth (order of Gbps) optical fiber links
> Board-level and link-level redundancy can be provided
> High-speed communication across sectors, enabling efficient inter-cell information sharing for cooperative/coordinated radio resource management, scheduling, and power control to optimize cell throughput and reduce interference
> Reduced need for hardware at antenna sites
> Utilizes optical links where already available, avoiding the laying of new links, which can make the infrastructure expensive

[Figure 2: CRAN Architecture A: Utilizing CPRI/OBSAI Link. Cloud RAN units host RRC, S1-AP, X2-AP, RRM, and SON, with per-cell Layer 2 and Layer 1 instances connected over high-speed links to CPRI/OBSAI engines via fiber]

The main disadvantage of this approach is the high-bandwidth link required between the radio equipment and the central unit. For example, CPRI supports line-bit-rate options ranging from 614 Mbps to 6.14 Gbps. Overlaying such high-bandwidth connections is a costly prerequisite and can be a big barrier to this solution becoming popular. To overcome this problem, the split between the radio equipment and the control unit can be moved higher up the network stack (i.e., from below Layer 1 to between Layer 1 and Layer 2). Then, instead of sharing IQ samples, only the demodulated and decoded data and protocol information need to be shared over an IP-based link between the remote unit and the central unit. This considerably reduces the bandwidth requirement, to approximately 200 Mbps for a 2x2 MIMO, 20 MHz cell. Figure 3 depicts a CRAN architecture utilizing an IP link between the radio unit and the central unit.

Key features of Architecture Option B are:

> The Cloud RAN unit is connected to the radio equipment site with relatively low-bandwidth (order of 100 Mbps) IP links; IP connectivity should be through an operator-managed network so that there is strict control over latency and jitter
> The antenna site terminates the IP links and carries out Layer 1 processing according to air interface timing
> Layer 3 and Layer 2 are located in the Cloud RAN unit. To handle the impact of IP-link latency on the strict 1 ms scheduling of LTE, modification of the MAC will be required. A portion of the MAC should also run in the baseband unit at the antenna site to control the time-critical Layer 1 interface and relay messages between the Cloud MAC and the antenna Layer 1
> High-speed communication across sectors, enabling efficient inter-cell information sharing for cooperative/coordinated radio resource management, scheduling, and power control to optimize cell throughput and reduce interference

The main advantage of option B is that it requires cheaper, lower-bandwidth IP links between the cell site and the central unit. However, the cell site will require more hardware compared with option A, because Layer 1 and part of Layer 2 are executed at the cell site. In addition, end-to-end latency increases due to the delay and variance characteristics of the IP link.

[Figure 3: CRAN Architecture B: IP Link between Cloud RAN Unit and Antenna Site Equipment. Cloud RAN units host RRC, S1-AP, X2-AP, RRM, SON, and per-cell Layer 2 instances; each antenna site runs Layer 1 and a partial MAC per cell, with site management, connected over delay-prone IP links]

BBU POOLING

The pooling of processing resources for multiple cell sites at a central location (utilizing architecture option A or B) has many benefits. Based on the capacity, coverage, and number of air interface technologies to be supported, additional BBUs can easily be added and remotely managed. The cell sites need only RRHs and antennas; this greatly reduces the space, power consumption, and management overheads of the cell site.

KEY BENEFITS OF BBU POOLING

Capex and Opex Reduction
The hardware can be pooled across multiple cell sites in order to reduce the initial capital costs, as well as the regular running (electricity, site rental, etc.) and maintenance costs.
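The fronthaul arithmetic behind the two splits above can be sanity-checked with a short calculation. The sketch below uses commonly cited CPRI constants (30.72 Msps sampling for a 20 MHz LTE carrier, 15-bit I and Q samples, 16/15 control-word overhead, 8b/10b line coding) and an assumed ~150 Mbps peak user-plane rate for a 2x2, 20 MHz cell; exact figures vary by configuration, so treat this as an order-of-magnitude check rather than a vendor specification.

```python
# Back-of-the-envelope fronthaul bandwidth for the two functional splits.

def cpri_rate_bps(bandwidth_mhz=20, antennas=2, sample_bits=15):
    """IQ-sample rate on a CPRI link (split below Layer 1)."""
    samples_per_sec = 30.72e6 * (bandwidth_mhz / 20)        # LTE sampling rate
    payload = samples_per_sec * sample_bits * 2 * antennas  # 2 = I + Q
    return payload * (16 / 15) * (10 / 8)   # control words + 8b/10b coding

def ip_split_rate_bps(peak_user_rate_mbps=150, overhead=1.3):
    """Approximate link rate when only demodulated/decoded data crosses the
    link (split between Layer 1 and Layer 2), with ~30% protocol headroom
    (an assumption for this sketch)."""
    return peak_user_rate_mbps * 1e6 * overhead

cpri = cpri_rate_bps()       # ~2.46 Gbps for a 2x2, 20 MHz cell
ip = ip_split_rate_bps()     # ~195 Mbps, in line with the ~200 Mbps above
print(f"CPRI split: {cpri / 1e9:.2f} Gbps, L1/L2 split: {ip / 1e6:.0f} Mbps")
```

The roughly tenfold gap between the two results illustrates why Architecture A needs dedicated fiber while Architecture B can use operator-managed IP links.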
Load Aggregation and Balancing
Baseband processing for multiple cell sites is aggregated based on the bandwidth requirement, without increasing the number of cell sites. The BBU units can be dynamically distributed to different cell sites based on usage patterns.

Multiple Technologies Support
The BBU units can be dynamically configured to support different air interface technologies based on network load and service requirements.

High Availability
The BBU pool contains a number of BBU units. On the failure of any single BBU, the other active BBUs can share the load of the failed BBU so that service recovers seamlessly. During multiple BBU failures, the active BBU units can be dynamically configured to share the traffic loads of the cell sites supported by the BBU pool.

Cooperative Multi-point Operation (CoMP)
The BBUs connected to different cell sites are located in a centralized location, allowing cell site information related to signaling, traffic data, resource allocation, channel status, etc. to be easily shared between BBUs. This information can be used to optimize resource allocation, handovers, call handling, and scheduling for Inter-Cell Interference Coordination (ICIC), and to improve spectral efficiency. CoMP and ICIC are key requirements of LTE-A in the 3GPP Rel-11 specifications. Because the BBUs support both macrocells and small cells, the coordinated multi-site processing helps optimize mobility and ICIC between heterogeneous networks.

SON Support
The shared information of the BBUs can be used by advanced SON features to optimize various services. SON can dynamically configure the resources used for cell site processing, optimize handovers between cells, manage inter-RAT handovers, conduct cell-load balancing, and use hardware resources efficiently. During very low load conditions, some of the BBUs can be switched off to save energy and help achieve a green BTS.

Metrocells

As mentioned before, adding more macrocells to support increased capacity and coverage is not an optimal solution. In an effort to reduce the load on the macrocells, and to provide higher capacity and greater coverage, operators are deploying offloading solutions in which the macrocells are offloaded to low-capacity, low-power small cells called metrocells.

The metrocells can be deployed on lamp posts, buildings, etc. and are connected to the operator core network through IP backhaul. These cells can be deployed in both indoor and outdoor environments. This provides an economically viable way for the operator to increase cell density at lower cost, with efficient spectrum usage and less time taken to extend capacity and coverage.

Radio Network Controllers on Cloud

As defined by NIST, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort and service provider interaction.

The radio network controllers in the Cloud RAN solution are built using this cloud-computing model to support GSM BSC, UMTS RNC, HeNB-GW, MME, and WiFi-GW functions with increased capacity, in addition to multiple technologies. The cloud computing model can also be extended to core network elements to support a flexible, open architecture with increased capacity, different technologies, effective reuse of resources, and high availability.

Traditionally, radio access network controllers like the BSC, RNC, H(e)NB-GW, etc. are built on specific, customized hardware. The controller application can run only on specific hardware and software solutions, and is built for an estimated capacity. The available resources are never used to their full capacity, which increases the TCO, time to market, and dependency on specific hardware and software vendor solutions.

[Figure 4: Cloud Computing Service Models. Software as a Service (SaaS): end applications such as controller applications. Platform as a Service (PaaS): application platform or middleware as a service. Infrastructure as a Service (IaaS): cloud hardware, CPU, cores, disks, fabric]
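The BBU pooling behavior described above (aggregation across sites, dynamic reallocation, and failover within the pool) can be sketched with a toy allocator. The capacities and per-cell loads below are illustrative units invented for the sketch, not measurements:

```python
# Toy BBU pool: cells from many sites share pooled baseband capacity, and a
# failed unit's cells are redistributed to the least-loaded survivors.

class BBUPool:
    def __init__(self, capacities):
        self.capacities = dict(capacities)   # bbu_id -> capacity (load units)
        self.cells = {}                      # cell_id -> (bbu_id, load)

    def _load(self, bbu_id):
        return sum(load for b, load in self.cells.values() if b == bbu_id)

    def assign(self, cell_id, load):
        # place the cell on the least-loaded BBU that still has headroom
        for bbu in sorted(self.capacities, key=self._load):
            if self._load(bbu) + load <= self.capacities[bbu]:
                self.cells[cell_id] = (bbu, load)
                return bbu
        raise RuntimeError("pool exhausted; add another BBU to the pool")

    def fail(self, bbu_id):
        # collect the failed unit's cells, then spread them over the pool
        orphans = [(c, l) for c, (b, l) in self.cells.items() if b == bbu_id]
        del self.capacities[bbu_id]
        for cell, _ in orphans:
            del self.cells[cell]
        for cell, load in orphans:
            self.assign(cell, load)

pool = BBUPool({"bbu1": 10, "bbu2": 10, "bbu3": 10})
for i in range(6):
    pool.assign(f"cell{i}", 3)   # six cells, 3 load units each
pool.fail("bbu1")                # bbu1's cells move to bbu2 and bbu3
```

The same assign-to-least-loaded rule serves both load balancing at configuration time and recovery after a failure, which is essentially the high-availability benefit described above.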
Cloud computing architecture defines three different service models, as shown in Figure 4, where COTS solutions can be used in the different service layers to avoid customized hardware and software solutions from specific vendors.

The radio network controller applications in the cloud computing environment still need all the software and hardware layers found in traditional telecom equipment. However, hardware virtualization, OS abstraction layers, and middleware layers are provided to the application through virtual service layers, so that it can remain independent of the underlying hardware and software components.

Cloud computing is in the very early stages of adoption in the telecom controller space. A common interface that would let controller applications run as SaaS on different vendors' PaaS and IaaS offerings is still evolving, and standards bodies like NIST and ETSI are working to define standard interfaces for the different service layers.

Per NIST, interoperability and portability of customer workloads are generally more achievable in the IaaS service model, because the building blocks of IaaS offerings are relatively well defined (e.g., network protocols, CPU instruction sets, legacy device interfaces, etc.).

The IaaS layer is supported by multiple vendors through their COTS virtualization solutions. A hypervisor, also called the virtual machine manager (VMM), provides hardware virtualization so that multiple operating systems can run concurrently on a host computer. The virtual hardware is called a virtual machine, and the operating system it runs is called the guest. Each guest OS instance running on a VM acts as an individual server for the application. Figure 5 shows an overview of virtual servers.

A virtual machine (VM) is a software implementation of a computer that executes programs like a physical machine. Virtual machines fall into two major categories based on their use and degree of correspondence to a real machine. A system virtual machine provides a complete system platform that supports the execution of a complete operating system (OS), while a process virtual machine is designed to run a single program and support a single process.

A system virtual machine (virtual hardware), which provides an abstraction of a simple x86 PC with private CPU, memory, network interface (NIC), and file system, is used for controller virtualization. Each VM is independent of the VMM and of other VMs.

As the number of VMs grows, the complexity of I/O traffic and hardware handling in the VMM increases, and application handling slows down significantly compared with a non-virtualized environment. The PCI-SIG has defined the SR-IOV (Single Root I/O Virtualization) standard, in which a physical device implements hundreds of images of itself, one for each VM. Each VM communicates with its own set of I/O queues and can use the device directly, without the performance cost of going through the VMM, while isolation between the VMs is preserved.

[Figure 5: Virtual Servers. Before: three different servers for three operating systems and services. After: only one server required for the three operating systems and services, each running as a guest (DOM U) on shared hardware]
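The contention effect that SR-IOV removes can be illustrated with a deliberately simple model: packets from N VMs either funnel through a single serialized VMM software path, or each VM drains its own virtual-function (VF) queue in parallel. All costs below are arbitrary units invented for the sketch; real overheads depend on the hypervisor, NIC, and workload.

```python
# Toy queueing model contrasting VMM-mediated I/O with SR-IOV VF queues.

def total_service_time(packets_per_vm, n_vms, per_packet_cost, shared):
    """Time to drain all I/O queues, in arbitrary units.
    shared=True  -> one serialized path through the VMM software switch
    shared=False -> per-VM VF queues that progress in parallel via DMA"""
    if shared:
        return packets_per_vm * n_vms * per_packet_cost  # serialized in VMM
    return packets_per_vm * per_packet_cost              # parallel VF queues

# Assumed costs: 3 units per packet through the VMM (emulation + copy),
# 1 unit per packet via direct DMA through a VF.
vmm = total_service_time(1000, 8, per_packet_cost=3, shared=True)
sriov = total_service_time(1000, 8, per_packet_cost=1, shared=False)
print(f"VMM path: {vmm} units, SR-IOV path: {sriov} units")
```

The point of the toy model is that the VMM path penalty grows with the number of VMs, while the SR-IOV path does not, which is why direct device assignment matters as controller VMs are consolidated onto fewer hosts.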
VMware supports this technology in its ESXi VMM through VMDirectPath. VMDirectPath I/O allows a guest operating system on a virtual machine to directly access physical PCI and PCIe devices connected to the host. Each virtual machine can be connected to up to two PCI devices, and PCI devices connected to a host can be marked as available for pass-through in the advanced hardware settings of the host configuration. Intel and AMD provide hardware-based assistance for I/O virtualization that complements single-root I/O virtualization; Intel's name for this technology is VT-d, while AMD's is AMD-Vi.

The controller applications in the cloud environment are based on a third-party IaaS layer interfacing with the guest OS/virtual machine in the service-layer hierarchy. All software layers above IaaS (the guest OS, middleware layers, controller-specific OAM, the controller application, etc.) are provided by telecom equipment manufacturers (TEMs). The guest OS can be any standard OS like Linux, VxWorks, or Solaris, depending on the application architecture.

Virtual server/cluster management is part of the third-party IaaS solution. It provides the mechanisms to manage the virtualization environment, control the execution of the virtual machines, and load the associated applications. Some of the key functionalities supported by virtual machine management are:

> Centralized control of, and deep visibility into, the virtual infrastructure (create, edit, start, stop VMs)
> Proactive management to track physical resource availability, configuration, and usage by VMs
> Distributed resource optimization
> High availability
> A scalable and extensible management platform
> Security

Multiple vendors support centralized control at different levels of the virtualization environment; VMware vCenter is one such solution providing a scalable and extensible management platform.

The operator can host the controller application software on the operator's own private cloud or on a service provider's (community or public) cloud.

Using a cloud computing environment for radio network controllers has the following advantages:

Hardware Independence
Controller software can run on COTS hardware available from different hardware vendors, with no binding to customized hardware solutions. Different applications can run on the same hardware, so that available resources can be used on demand.

Software Independence
Application software can run on COTS virtual machines available from different vendors as IaaS. The application is independent of the actual hardware used, so it can run on different hardware with no application software changes and no dependency on proprietary software.

Resource Pooling
Different hardware types can be pooled to run multiple instances of application software to support increased capacity. The resources can be dynamically allocated, with different applications running on the same hardware.

High Availability
Using pooled resources to run controller applications takes care of single or multiple unit failures within a pool of resources, while providing geo-redundancy, multi-tenancy, and elasticity.

Reduced CAPEX
Use of COTS hardware and software reduces TCO and time to market. Reuse of available resources with dynamic allocation helps use the full capacity of the resources, thus reducing the number of resources required.

Reduced OPEX
Use of common hardware and software reduces the cost of managing different customized solutions. Resources can be used effectively depending on load conditions; based on demand, some resources can be switched off in order to reduce electricity and other infrastructure costs (e.g., cooling).

Elasticity and Best-in-Class Performance
The capacity of the system can change quickly according to need. The controller applications (RNC, BSC, etc.) run in virtual machines independent of the physical hardware. Third-party virtualization technology from different vendors can be used to host the application-specific OS, middleware, and applications. Multiple vendors provide the virtualization IaaS layer; some of the key solutions are VMware, KVM, and the WR (Wind River) hypervisor.
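The elasticity benefit can be made concrete with a small sizing rule: run only as many controller VM instances as the current load requires. The per-instance capacity and utilization target below are assumptions for illustration only:

```python
import math

INSTANCE_CAPACITY = 1000   # calls/s one controller VM can carry (assumed)
TARGET_UTILIZATION = 0.7   # keep headroom for bursts (assumed)

def instances_needed(offered_load):
    """Smallest instance count that keeps utilization under the target."""
    usable = INSTANCE_CAPACITY * TARGET_UTILIZATION
    return max(1, math.ceil(offered_load / usable))

# A day's load curve (calls/s): quiet night through busy evening peak
for load in (100, 900, 2500, 5600):
    print(f"{load:>5} calls/s -> {instances_needed(load)} instance(s)")
```

With dedicated hardware, capacity for the peak would have to be provisioned permanently; in the pooled model the instances not needed off-peak can serve other applications or be switched off, which is exactly the OPEX lever described above.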
An example of a radio controller application in a cloud environment is shown in Figure 6.

[Figure 6: Controller Application Over the IaaS Layer. Different applications (BSC, RNC, H(e)NB-GW) with their middleware, OAM, and guest OSs run in separate VMs on COTS virtual hardware (virtual CPU, disk, I/O), above a hypervisor and COTS VM manager, on physical hardware (servers or ATCA)]

Multiple applications can run on a single platform, with different VMs running different OSs in a multi-tenant model. In a multi-core environment, different applications can run on different cores, each with its associated VM, guest OS, middleware, and application. The different controller applications allow a common cloud computing architecture to dynamically use the available resources.

Common Management Server

As previously mentioned, operators use more than one RAT to support wireless data traffic demand. Converged solutions (AAA, RRH, multi-standard BBUs, and radio network controllers) are used to support multiple technologies. Managing these converged network elements requires a common management server capable of supporting the FCAPS features for GSM, UMTS, and LTE network nodes.

SON Functions

In the Cloud RAN network architecture, each network element is capable of self-configuration, self-optimization, and autonomous recovery. SON in this architecture is based on decentralized algorithms, as applicable at each individual network element. The operator may support multiple technologies like GSM, WCDMA, and LTE in the Cloud RAN deployment. This requires network-level self-optimization to support automatic updates of network topology changes between E-UTRAN/UTRAN/GERAN networks.

Information related to the network load, performance, etc. of the different wireless technologies is used by a centralized function to dynamically allocate shared resources to different network elements in the Cloud RAN and to support load balancing. For example, when the GSM load is low but UMTS is at its peak, shared network elements like the AAA and RRH can be configured to support additional cells, frequency bands, etc. When the network load is low, network elements can be switched off wherever the load can be handled by a minimum set of elements.

Conclusion and Aricent Value Proposition

As discussed in the previous sections, the complexity of enhancing traditional networks to support increasing broadband capacity and coverage is not economically viable. There is an immediate need to deploy distributed networks with centralized baseband units, RRHs, AAA, and radio network controllers on the cloud, to reduce the complexity of introducing additional cell sites and adding antennas and radio components. Radio network controllers in a cloud environment using virtualization technology reduce the infrastructure cost of supporting multiple technologies and the complexity of managing multiple network elements.

In the 3rd Generation Partnership Project (3GPP) international standardization group meeting held in June 2012, "energy saving," "cost efficiency," and "support for diverse application and traffic types" were identified as priority areas for Release 12. Deploying a Cloud RAN architecture-based network can address these requirements. The NGMN group also initiated a "CENTRALISED PROCESSING, COLLABORATIVE RADIO, REAL-TIME CLOUD COMPUTING, CLEAN RAN SYSTEM (P-CRAN)" [11] project to address these issues.

Implementation of a Cloud RAN solution can save up to 15 percent of CAPEX and up to 50 percent of OPEX over five to seven years compared with a traditional RAN deployment, per the China Mobile report [1]. According to the Alcatel-Lucent lightRadio economics analysis [2], these disruptive RAN architecture designs and innovative features can reduce overall TCO by at least 20 percent over five years for an existing high-capacity site in an urban area, with at least a 28 percent reduction for new sites.

Aricent is actively participating in and following emerging C-RAN architecture initiatives. Aricent eNodeB, EPC, and HeNB-GW IPRs are ready for CRAN architecture.

eNodeB Framework

> RAN on the cloud must cater to variable capacity requirements and host multiple cells. Aricent Layer 3 and Layer 2, including the Scheduler, MAC, RLC, PDCP, and GTP-U, are scalable for multi-core architectures and support multiple form factors (femto, pico, micro) and different capacity requirements based on the deployment
> A single instance of Aricent Layer 3 can handle multiple cells/sectors hosted on Cloud RAN equipment and can interface with cells/sectors hosted on other Cloud RAN equipment over the X2 link
> Aricent Layer 2 can handle one cell/sector per instance, and multiple instances of Layer 2 can be utilized to handle multiple cells/sectors
> The eNodeB software is modified to handle the IP link interface (architecture option B described previously) between the cell site unit and the central unit

Enhanced Packet Core Modules

Universal SON Server (UniSON)
[Diagram: Universal SON Server: EMS, UniSON server, TR69, eNodeB with SON client]

Additionally, Aricent is involved in multiple services projects related to solution architecture, implementation, and field support of C-RAN solutions. This includes work with Tier 1 OEMs in the areas of multi-RAT BTS, virtual common hardware for RNC/BSC solutions, etc. Aricent is well equipped to provide the software frameworks (eNodeB, EPC, etc.), necessary resources, management framework, and a strong delivery process to assist customers with their own C-RAN solutions.

REFERENCES
(1) http://www.google.com/url?sa=t&rct=j&q=china+mobile+c-ran&source=web&cd=1&ved=0CE0QFjAA&url=http%3A%2F%2Flabs.chinamobile.com%2Farticle_download.php%3Fid%3D63069&ei=ebXyT6uBAc7LrQfRnK2rCQ&usg=AFQjCNFDC6S_4Oth6_0vLobNzvfvrlouHw
(2) http://www.alcatel-lucent.com/wps/DocumentStreamerServlet?LMSG_CABINET=Docs_and_Resource_Ctr&LMSG_CONTENT_FILE=White_Papers%2FlightRadio_WhitePaper_EconomicAnalysis.pdf&REFERRER=j2ee.www%20%7C%20%2Ffeatures%2Flight_radio%2Findex.html%20%7C%20lightRadio%3A%20Evolve%20your%20wireless%20broadband%20network%20%7C%20Alcatel-Lucent
(3) http://www.vmware.com/products/vcenter-server/overview.html
(4) http://www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html
(5) http://www.obsai.com/obsai/content/download/4977/41793/file/OBSAI_System_Spec_V2.0.pdf
(6) http://www.cpri.info/downloads/CPRI_v_5_0_2011-09-21.pdf
(7) http://csrc.nist.gov/publications/drafts/800-146/Draft-NIST-SP800-146.pdf
(8) http://collaborate.nist.gov/twiki-cloud-computing/pub/CloudComputing/RoadmapVolumeIIIWorkingDraft/NIST_cloud_roadmap_VIII_draft_110311.pdf
(9) http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
(10) http://www.umts-forum.org/component/option,com_docman/task,doc_download/gid,2545/Itemid,213/
(11) http://www.ngmn.org/workprogramme/centralisedran.html
Aricent is the world's premier engineering services and software company. We specialize in inventing, developing, and maintaining our clients' most ambitious initiatives. Combining more than 20 years of engineering expertise with a force of more than 10,000 dedicated product engineers, Aricent helps global companies bring the next generation of breakthrough, innovative products to market. frog, the global leader in innovation and design, based in San Francisco, is part of Aricent. The company's key investors are Kohlberg Kravis Roberts & Co. and Sequoia Capital.
[email protected]
© 2014 Aricent. All rights reserved. All Aricent brand and product names are service marks, trademarks, or registered marks of Aricent in the United States and other countries.