<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Embedded systems Archives - Tauro Technologies</title>
	<atom:link href="https://taurotech.com/blog/tag/embedded-systems/feed/" rel="self" type="application/rss+xml" />
	<link>https://taurotech.com/blog/tag/embedded-systems/</link>
	<description>IoT and Embedded Systems Development</description>
	<lastBuildDate>Fri, 03 Apr 2026 11:07:11 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Embedded Systems and Low-Power Design</title>
		<link>https://taurotech.com/blog/embedded-systems-and-low-power-design/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=embedded-systems-and-low-power-design</link>
		
		<dc:creator><![CDATA[Sargis Ghazaryan]]></dc:creator>
		<pubDate>Thu, 16 May 2024 17:56:37 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Hardware design]]></category>
		<category><![CDATA[bluetooth]]></category>
		<category><![CDATA[Communication Protocols]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[firmware development]]></category>
		<category><![CDATA[hardware design]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[low power]]></category>
		<category><![CDATA[nb-iot]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=3295</guid>

					<description><![CDATA[<p>Embedded Systems and Low-Power Design An embedded system refers to a specialized computer system designed to perform dedicated functions within a larger mechanical or electrical system. It typically consists of a combination of hardware and software components tailored to perform specific tasks or functions. Embedded systems play a crucial role in mobile robotics, UAV construction&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/embedded-systems-and-low-power-design/">Embedded Systems and Low-Power Design</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center"><strong>Embedded Systems and Low-Power Design</strong></h1>



<p>An embedded system is a specialized computer system designed to perform dedicated functions within a larger mechanical or electrical system. It typically consists of a combination of hardware and software components tailored to specific tasks. Embedded systems play a crucial role in mobile robotics, UAV construction and edge AI. Such systems are characterized by their real-time operation, reliability and efficiency in executing predetermined functions, often with limited resources such as processing power, memory and energy. In remote areas, for example, everything runs off batteries or generators, so many embedded systems incorporate techniques to extend battery life; others simply need to consume less energy for other reasons, such as thermal constraints. As a result, there&#8217;s an increasing demand for designs that minimize energy usage while maintaining high performance. In this article we elaborate on strategies for achieving low-power designs and highlight their significance in embedded systems.</p>



<h2 class="wp-block-heading"><strong>The Need for Low-Power Design</strong></h2>



<p>Low-power design involves strategies and approaches aimed at decreasing the energy usage of electronic devices, and in particular of the embedded systems on which such devices run. Examples include battery-powered devices, processors, IoT wireless sensor networks and many more. By applying low-power design methods, engineers can create high-quality, reliable equipment that consumes considerably less energy without any perceptible performance degradation. The need for low-power devices arises from several factors:</p>



<ul class="wp-block-list">
<li>Power sources are often limited, and a disruption in the energy supply can have serious consequences. This is particularly true for battery-powered devices in military situations, where power outages can cost lives. That&#8217;s why the defense sector is always looking for lower power consumption in airborne and ground vehicle applications.</li>



<li>Prolonged battery life in portable everyday devices (notebooks, smartphones, etc.) is a major concern for device manufacturers. In today&#8217;s world, customers commonly prefer devices with extended battery life.</li>



<li>Low-power design also has a large positive impact on the environment, as a great deal of electricity is wasted by devices connected to the grid. Decreasing the electricity consumption of such devices lowers costs and causes less damage to the environment.</li>



<li>In embedded systems, high power consumption can generate a significant amount of heat, damaging system components. Reducing generated heat is one of the concerns in military equipment production. In fact, an overall decrease in power consumption considerably reduces the heat generated. Consequently, employing low-power design techniques from the start protects the system from unexpected side effects due to thermal issues.</li>



<li>Less heat generation can lead to improved performance and reliability of the embedded system. Overheating can cause performance degradation or even hardware failures, so by keeping temperatures within acceptable limits, low-power designs contribute to overall reliability and durability of the system.</li>
</ul>



<h2 class="wp-block-heading"><strong>Key Principles of Low-Power Design</strong></h2>



<p>To grasp the fundamental principles of low-power design, it&#8217;s imperative to dive into power consumption basics, sleep modes, clock gating techniques and voltage scaling strategies. This exploration will shed light on how each aspect contributes to the creation of energy-efficient embedded systems.</p>



<h3 class="wp-block-heading"><strong>Power Consumption Basics</strong></h3>



<p>Power consumption indicates how much electrical energy a device or system uses to perform its functions or operations. There are two primary sources of power consumption in electronic devices &#8211; static and dynamic. Devices consume static power when idle (through leakage) and dynamic power during active use (through switching). Reducing both static and dynamic power consumption is essential for creating low-power designs, achieved through efficient components and optimized circuits. Understanding where power is consumed allows informed decisions on resource allocation and helps mitigate environmental impact.</p>
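<p>As a rough sketch (ours, not part of the article), the two components can be combined in a few lines of C using the textbook model <em>P = V&#183;I<sub>leak</sub> + &#945;&#183;C&#183;V&#178;&#183;f</em>, where &#945; is the switching activity factor. Every parameter value used with this function is purely hypothetical:</p>

```c
#include <assert.h>
#include <math.h>

/* Illustrative sketch: total power = static (leakage) + dynamic (switching),
 * P = V*I_leak + alpha*C*V^2*f. All parameter values are hypothetical. */
double total_power_w(double v, double i_leak_a,
                     double alpha, double c_farads, double f_hz)
{
    double p_static  = v * i_leak_a;                    /* flows even when idle */
    double p_dynamic = alpha * c_farads * v * v * f_hz; /* switching losses */
    return p_static + p_dynamic;
}
```

<p>Note the quadratic dependence on voltage: even a modest supply-voltage reduction pays off disproportionately, which is what motivates the voltage scaling strategies described below.</p>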



<h3 class="wp-block-heading"><strong>Power Management and Sleep Modes</strong></h3>



<p>Implementing sleep modes and power states can significantly reduce power consumption in embedded systems. Sleep modes enable devices to enter low-power states when not performing tasks, therefore conserving energy. Power states define consumption levels based on system activity and performance needs. Selecting appropriate modes ensures optimal power usage and performance while maintaining efficiency.</p>



<p>All sleep modes are accessible from active mode, where the CPU executes application code. Upon entering sleep mode, program execution halts, and the device relies on interrupts or a reset for waking up. The application code determines the timing and choice of sleep mode. Enabled interrupts from peripherals and reset sources can return the CPU from sleep to active mode. Furthermore, power reduction registers offer means to halt individual peripheral clocks via software control. This action freezes the peripheral&#8217;s current state, eliminating its power consumption. Consequently, power usage is minimized in both active mode and idle sleep modes, facilitating more nuanced power management than sleep modes alone.</p>
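<p>As a hedged illustration of how such power reduction registers behave, the following C model gates individual peripheral clocks with register bits. The register name and bit layout are invented for this sketch; consult your MCU&#8217;s datasheet for the real ones:</p>

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of a power-reduction register: each set bit halts one
 * peripheral's clock, freezing its state and eliminating its dynamic power.
 * Names are illustrative, not from any specific datasheet. */
enum { PRR_UART = 1u << 0, PRR_SPI = 1u << 1, PRR_ADC = 1u << 2 };

static uint8_t prr;  /* set bit = peripheral clock halted */

void peripheral_clock_off(uint8_t mask) { prr |= mask; }
void peripheral_clock_on(uint8_t mask)  { prr &= (uint8_t)~mask; }
int  peripheral_is_clocked(uint8_t mask){ return (prr & mask) == 0; }
```

<p>Application code would typically halt unused peripherals at startup and re-enable them only around the operations that need them.</p>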



<p>Here are several examples of low-power modes:</p>



<p><strong>Sleep Mode</strong>: In this mode, the device reduces its power consumption by powering down non-essential components while retaining data in memory. The CPU typically enters a low-power state, halting its operation until an external event, such as a button press or an interrupt, wakes it up.</p>



<p><strong>Deep Sleep Mode</strong>: This mode is an even lower power state compared to sleep mode. In deep sleep, the device shuts down most of its non-essential functions, including reducing power to the CPU and peripherals. This mode is commonly used in battery-powered devices to prolong battery life during extended periods of inactivity.</p>



<p><strong>Standby Mode</strong>: This mode is similar to sleep mode but may involve a slightly higher level of power consumption. In this mode, the device reduces power to most components, but some essential functions remain active to enable quick recovery. It&#8217;s commonly used in devices like TVs and remote controls, where rapid responsiveness is necessary.</p>



<h3 class="wp-block-heading"><strong>Clock Gating for Dynamic Power Reduction</strong></h3>



<p>Clock gating is a technique aimed at reducing dynamic power consumption by selectively switching off unnecessary clock signals to registers using control signals, all while ensuring functional correctness. By turning off the clock to idle parts of a device, it conserves power, directing it only to active components and minimizing waste. Implementing clock gating in embedded systems can substantially reduce power usage, particularly in devices with numerous components or intricate functionalities.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img fetchpriority="high" decoding="async" width="891" height="334" src="https://taurotech.com/wp-content/uploads/2024/05/1-1.png" alt="Circuit diagram of registers without clock gating, showing a Multiplexer (MUX) receiving a feedback loop from the DATA_OUT, controlled by an enable (EN) signal and a continuous clock." class="wp-image-3307" style="width:707px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2024/05/1-1.png 891w, https://taurotech.com/wp-content/uploads/2024/05/1-1-768x288.png 768w" sizes="(max-width: 891px) 100vw, 891px" /><figcaption class="wp-element-caption"><strong>Figure 1</strong>:&nbsp;Registers without clock gating</figcaption></figure>
</div>

<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img decoding="async" width="833" height="346" src="https://taurotech.com/wp-content/uploads/2024/05/2.png" alt="Circuit diagram of registers with clock gating, featuring an EN signal and clock passing through a LATCH and AND gate to create a GATED_CLK, reducing power consumption by disabling the clock when data is inactive." class="wp-image-3297" style="width:713px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2024/05/2.png 833w, https://taurotech.com/wp-content/uploads/2024/05/2-768x319.png 768w" sizes="(max-width: 833px) 100vw, 833px" /><figcaption class="wp-element-caption"><strong>Figure 2</strong>: Registers with clock gating</figcaption></figure>
</div>


<p>Typically, the assignment to a register is conditional, as depicted above. When EN is 0, the clocks to the registers can be stopped; otherwise, the registers switch states on each clock cycle, which dissipates power.</p>
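<p>The effect of the gating in Figure 2 can be approximated in software by counting how many clock edges actually reach the register. This is a behavioral sketch only, not synthesizable hardware:</p>

```c
#include <assert.h>

/* Software model of a clock-gated register: the clock reaches the register
 * only while EN is high, so idle cycles cause no switching. Returns the
 * number of clock edges the register actually sees over n cycles. */
int gated_clock_edges(const int *en, int n_cycles)
{
    int edges = 0;
    for (int i = 0; i < n_cycles; i++)
        if (en[i])
            edges++;   /* models the AND of CLK with the latched EN */
    return edges;
}
```

<p>An ungated register would see all eight edges in the example below; the gated one switches only on the three enabled cycles, and dynamic power scales with that switching count.</p>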



<h3 class="wp-block-heading"><strong>Voltage Scaling Strategies</strong></h3>



<p>Voltage scaling strategies in low-power design involve adjusting the core supply voltage to align with the system’s performance needs. Decreasing voltage decreases power consumption, but it can impact performance, necessitating a careful balance between the two. Techniques like adaptive voltage scaling and dynamic voltage scaling are commonly used in embedded systems to find this balance, often coupled with frequency scaling to maintain acceptable performance levels while reducing power consumption. Dynamic Voltage and Frequency Scaling (DVFS) is a power management technique that adjusts the voltage and frequency of the device&#8217;s CPU dynamically based on workload demands. During periods of low activity, the CPU voltage and frequency are decreased to save power, while they are increased during high-demand tasks to maintain performance. These strategies are particularly crucial in portable devices where battery life is a primary concern.</p>
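<p>A minimal DVFS governor can be sketched as a table lookup: pick the lowest voltage/frequency pair that still meets the demanded workload. The operating points below are made up for illustration; real tables come from the SoC datasheet:</p>

```c
#include <assert.h>

/* DVFS policy sketch: choose the lowest operating point whose frequency
 * still covers the demanded workload. Table values are illustrative. */
typedef struct { double volts; double mhz; } op_point;

static const op_point op_table[] = {
    { 0.9,  100.0 },   /* low-power point   */
    { 1.1,  400.0 },   /* mid point         */
    { 1.3, 1000.0 },   /* performance point */
};

op_point dvfs_select(double demand_mhz)
{
    for (unsigned i = 0; i < sizeof op_table / sizeof op_table[0]; i++)
        if (op_table[i].mhz >= demand_mhz)
            return op_table[i];
    /* demand exceeds the table: saturate at the fastest point */
    return op_table[sizeof op_table / sizeof op_table[0] - 1];
}
```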



<h2 class="wp-block-heading"><strong>Design Techniques for Low-Power Embedded Systems</strong></h2>



<p>When thinking about low-power embedded systems, there is no single rule that applies to every type of requirement. Rather, it is a combination of system design, circuit design and firmware design, all working together to deliver the best performance per watt. Embedded engineers construct embedded systems using various low-power techniques, allowing for adaptable control over a device&#8217;s energy usage based on its activities and operating patterns.</p>



<h3 class="wp-block-heading"><strong>Hardware Techniques for Low-Power Design</strong></h3>



<p>In the realm of low-power embedded system design, the selection of hardware components plays a pivotal role. Optimal choices can significantly influence the system&#8217;s overall power consumption. This section will delve into various hardware techniques, such as component selection for low-power embedded systems, employing energy-efficient microcontrollers and processors, and integrating sensors designed for minimal energy consumption.</p>



<p><strong>Energy-efficient component selection</strong>: Picking the right components is crucial for any electronic system, affecting design, layout, and power usage. When it comes to low-power designs, choosing components wisely is even more critical. To reduce power consumption in embedded systems, we need to focus on factors like operating voltage, idle/standby current, and overall efficiency of the components. Opting for parts with lower consumption can significantly cut down on energy usage in the system.</p>



<p><strong>Energy-efficient microcontroller and processor selection</strong>: Embedded systems rely heavily on microcontrollers and processors, and their power efficiency is crucial in determining overall power usage. When choosing a microcontroller or a processor, prioritize components with low operating voltages, effective sleep modes, and power-saving capabilities like clock gating and voltage scaling. Incorporating these features ensures decreased power consumption without compromising performance, making them ideal choices for energy-conscious designs.</p>



<p>One example of a low-power AI accelerator is <a href="https://hailo.ai/products/ai-vision-processors/hailo-15-ai-vision-processor/">Hailo-15</a> that can process multiple video streams in real time on a single device with robust onboard network connectivity. It offers very high AI performance of 26 TOPS and very low power consumption of 2.5W which makes it perfect for AI computing and for mission-critical applications with power consumption reduced by approximately 70% compared to GPU based solutions. Another example is Intel&#8217;s hybrid CPU architecture, which combines “P cores” for high-intensity computational tasks and “E cores” for handling less-intensive tasks while maximizing energy-efficiency, addressing the requirements of modern computing.</p>



<p><strong>Energy-efficient process node selection</strong>: For semiconductor ICs, moving to a newer technology node pays off directly in power: a 5nm node can reduce power by roughly 40% compared to 10nm, 3nm improves on 5nm by about 45%, 14nm roughly halves power compared to 28nm, and so on. Power efficiency can be dramatically improved by using ICs built on the latest technology node.</p>



<p><strong>Energy-efficient FPGA design</strong>: Field-Programmable Gate Array (FPGA) devices offer the advantage of flexibility and customization in hardware design. In certain applications, this flexibility can lead to power reduction by combining multiple functions into a single FPGA device rather than using discrete components.</p>



<p><strong>Energy-efficient sensor selection</strong>: Sensors play a crucial role in embedded systems, gathering data from the surroundings or user interactions. Opting for sensors with minimal power demands that can transition into low-power modes when inactive is key. Furthermore, explore sensors equipped with built-in power management functionalities like automatic sleep modes and adjustable sample rates to enhance energy efficiency even further. Selecting sensors with these capabilities can significantly reduce overall power consumption in the system, ensuring efficient operation.</p>



<h3 class="wp-block-heading"><strong>Software Techniques for Low-Power Design</strong></h3>



<p>It is generally more effective to begin monitoring energy consumption as early as possible, to assess the potential risks of high energy consumption points during implementation. Once the software is implemented and integrated, it is usually more difficult and expensive to eliminate such issues. Moreover, energy consumption is directly proportional to computational complexity, and improving one indirectly improves the other. Therefore, it is a good idea to introduce several software development techniques to achieve low power in embedded systems.</p>



<p><strong>Code optimization</strong>: Optimize algorithms to reduce overall CPU utilization. Use efficient algorithms and data structures to reduce computational complexity. Frequently, there is a tradeoff between faster processing with larger code size and slower processing with smaller code size. Optimizing code for speed rather than size is usually the better choice, since a task that finishes sooner lets the processor return to a low-power state sooner.</p>
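<p>A classic instance of the speed-versus-size tradeoff is replacing a per-bit loop with a small lookup table. This sketch (our example, not from the article) counts set bits in one table read instead of eight loop iterations, letting the CPU race back to sleep sooner at the cost of 256 bytes of memory:</p>

```c
#include <assert.h>
#include <stdint.h>

/* Speed-for-size tradeoff: a 256-byte table answers a per-byte popcount
 * in one memory read instead of an eight-iteration bit loop. */
static uint8_t popcount_lut[256];

void popcount_init(void)
{
    for (int i = 0; i < 256; i++) {
        uint8_t n = 0;
        for (int b = 0; b < 8; b++)
            n += (uint8_t)((i >> b) & 1);
        popcount_lut[i] = n;
    }
}

uint8_t popcount_fast(uint8_t x) { return popcount_lut[x]; }
```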



<p><strong>Event-Based Task Scheduling</strong>: Events are generated to trigger the system to perform some work. Once the processor finishes the requested task, it goes back to an idle state, allowing it to remain in low-power modes for longer durations. Incorporating sleep modes in the code puts the processor or specific peripherals into low-power states during periods of inactivity. Efficient task scheduling algorithms minimize wake-up times and ensure that tasks are executed in a power-efficient manner. </p>
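<p>The event-driven pattern can be sketched as a main loop that handles pending events and otherwise sleeps. The counters below stand in for real interrupt handling and a real sleep instruction; all names are illustrative:</p>

```c
#include <assert.h>

/* Event-driven main-loop sketch: the processor handles pending events,
 * then "sleeps" until the next one. 'sleeps' approximates time spent
 * in a low-power state. */
typedef struct { int pending; int handled; int sleeps; } scheduler;

void post_event(scheduler *s) { s->pending++; }   /* e.g. from an ISR */

void run_once(scheduler *s)   /* one iteration of the main loop */
{
    if (s->pending > 0) {
        s->pending--;
        s->handled++;         /* do the requested work */
    } else {
        s->sleeps++;          /* nothing to do: enter sleep mode */
    }
}
```

<p>The more of its iterations the loop spends in the else-branch, the longer the device stays in a low-power state.</p>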



<p><strong>Optimized Data and I/O Access</strong>: Minimize unnecessary data transfers and use efficient data structures to reduce power consumption during memory access operations; in particular, avoid unnecessary copying of data, especially when large blocks of memory are allocated. Reduce the frequency of I/O operations and use techniques such as batch processing to minimize power consumption during data transfers. Optimize cache usage to minimize memory accesses and reduce the power consumed when accessing external memory.</p>
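<p>Batch processing can be sketched as a small write buffer that wakes the bus once per block instead of once per byte. The buffer size and counters here are illustrative, standing in for a real peripheral driver:</p>

```c
#include <assert.h>
#include <stddef.h>

/* Batching sketch: bytes are accumulated and flushed in blocks, reducing
 * the number of power-hungry bus wake-ups compared to per-byte writes. */
typedef struct {
    unsigned char buf[16];
    size_t fill;
    int transactions;   /* how many times the bus was woken up */
} batch_writer;

void batch_flush(batch_writer *w)
{
    if (w->fill > 0) {
        w->transactions++;  /* one burst transfer for the whole buffer */
        w->fill = 0;
    }
}

void batch_write(batch_writer *w, unsigned char byte)
{
    w->buf[w->fill++] = byte;
    if (w->fill == sizeof w->buf)
        batch_flush(w);
}
```

<p>Writing 40 bytes costs three burst transactions here instead of 40 individual ones.</p>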



<p><strong>Code Profiling and Optimization</strong>: Profiling code to identify power-hungry sections and optimizing them to reduce power consumption without sacrificing performance is a major area for optimization. Additionally, compilers that optimize code for low-power execution can significantly reduce energy consumption by minimizing unnecessary operations and maximizing sleep modes utilization. Debugging tools that provide insights into power consumption behavior during development help identify and solve power inefficiencies early in the design process.</p>



<h3 class="wp-block-heading"><strong>Using Low-Power Communication Protocols</strong></h3>



<p>The adoption of low-power communication protocols within embedded systems is paramount for achieving energy efficiency while maintaining reliable data transmission. This section aims to offer insights into energy-efficient communication standards and wireless protocols customized for low-power applications.</p>



<h4 class="wp-block-heading"><strong>Wireless Protocols for Low-Power Design</strong></h4>



<p>Wireless communication is gaining popularity in embedded systems for its adaptability and scalability. However, without energy-efficient implementation, it can lead to considerable power consumption. Several wireless protocols, tailored for low-power applications, have emerged to address this concern, including:</p>



<ul class="wp-block-list">
<li><strong>BLE</strong> (Bluetooth Low Energy) is designed for low-power devices and applications with infrequent data transmission.</li>



<li><strong>NB-IoT</strong> technology is designed to provide low-power wide-area network (LPWAN) connectivity for IoT devices. NB-IoT devices have very low power consumption compared to traditional cellular devices, which enables them to operate on a single battery charge for years.</li>



<li><strong>Z-Wave</strong> is a highly efficient and low-energy technology. While the smart home hub requires a constant power supply to keep the network up and running, many Z-Wave devices operate on battery power alone for a year or more before requiring replacement.</li>



<li><strong>LoRa</strong> is ideal for IoT applications requiring low data rate transmission over long distances.</li>



<li><strong>ZigBee </strong>is a low-power, low-data-rate wireless communication protocol commonly used in home automation and industrial control systems.</li>
</ul>



<h2 class="wp-block-heading"><strong>Conclusion</strong></h2>



<p>Tauro Technologies can dramatically reduce system cost, size, and power requirements through optimized hardware and software design, and meticulous component selection. Our <a href="https://taurotech.com/products/">diverse portfolio </a>of high-efficiency modules and integrated systems is engineered to meet the most demanding industrial standards. <a href="https://taurotech.com/contact-us/">Contact us</a> to explore how we can enhance your systems.</p>



<p>The post <a href="https://taurotech.com/blog/embedded-systems-and-low-power-design/">Embedded Systems and Low-Power Design</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Indoor Location Tracking Systems</title>
		<link>https://taurotech.com/blog/indoor-location-tracking-systems/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=indoor-location-tracking-systems</link>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 08 Mar 2024 21:43:14 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Hardware design]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[bluetooth]]></category>
		<category><![CDATA[Communication Protocols]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[firmware development]]></category>
		<category><![CDATA[UWB]]></category>
		<category><![CDATA[Wi-Fi]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=3204</guid>

					<description><![CDATA[<p>Indoor Location Tracking Systems What is an indoor location tracking system? Indoor location tracking system locates and tracks the movement of people or objects inside buildings. Indoor location tracking is enabled by indoor positioning systems, a network of electronic devices and computer software used to locate people or objects where and when GPS is inaccurate&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/indoor-location-tracking-systems/">Indoor Location Tracking Systems</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center"><strong>Indoor Location Tracking Systems</strong></h1>



<h3 class="wp-block-heading"><strong>What is an indoor location tracking system?</strong></h3>



<p>An indoor location tracking system locates and tracks the movement of people or objects inside buildings. Indoor location tracking is enabled by indoor positioning systems, networks of electronic devices and computer software used to locate people or objects where and when GPS is inaccurate or fails completely. Furthermore, the accuracy of GPS is often less than what&#8217;s required to track objects indoors. Although the terms “indoor location tracking” and “indoor positioning” are used interchangeably, there are currently many different technologies used to calculate and provide real-time location data.</p>



<p>In this blog post, we&#8217;ll talk about the changing world of indoor location tracking systems, delve into the countless applications in the industry, uncover the benefits they bring, and speculate on the exciting future prospects of indoor location tracking systems.</p>



<h3 class="wp-block-heading"><strong>How do indoor location tracking systems work?</strong></h3>



<p>Indoor location tracking systems, also known as indoor positioning systems (IPS), detect and track object location using a variety of sensors. IPS normally uses transmitters (e.g. tags, badges) and receivers (e.g. beacons) to provide precise location information for tracked assets. Transmitters identify people or assets and can be attached, embedded, or worn. Receivers capture signals from transmitters and send the data to the central management system. These systems are widely used across various industries to track personnel, valuable equipment, materials, and vehicles.</p>



<p>GPS and IPS services are sometimes mixed up due to similar tasks and acronyms. GPS works best outdoors, relying on satellites for location. Indoors, GPS signals are unreliable and lack precision in crowded spaces. Ongoing research may bring new indoor GPS options in the future.</p>



<h3 class="wp-block-heading"><strong>Technologies Used in Indoor Location Tracking Systems</strong></h3>



<p>An indoor positioning system helps find people or objects inside a building. It has two main parts: anchors and position tags. Anchors, like beacons or relays, are placed strategically around the premises. People or things carry position tags. Anchors actively locate these tags or provide location/context information for the device.</p>



<p>There are different ways to track objects indoors: Bluetooth, Wi-Fi, Magnetic Field Detection, Near Field Communication (NFC), Ultra-wideband (UWB) radio, and UHF RFID. Each method has its own level of accuracy, cost, power usage, and ease of use. Since there&#8217;s no obvious best choice, it is sometimes difficult to determine which technology is most suitable. Let&#8217;s look at the most common options.</p>



<ul class="wp-block-list">
<li><strong>Bluetooth Based Indoor Positioning</strong></li>
</ul>



<p>Bluetooth based indoor positioning is a promising technology for expanding indoor tracking into various fields, such as logistics, healthcare, manufacturing, retail, warehouses, and smart buildings.</p>



<p>Bluetooth proves to be a highly effective choice for indoor localization, offering real-time meter-level accuracy with cost-effective and power-efficient hardware. Its deployment is simplified by technological standardization, which ensures cross-vendor device compatibility, and the widespread adoption of Bluetooth in existing devices further contributes to its ease of use.</p>



<p>A BLE (Bluetooth Low Energy) IPS solution uses beacons or sensors to locate and detect transmitting Bluetooth devices, such as tracking labels and smartphones, throughout the indoor area. Location data obtained from sensors or sent from beacons to mobile devices is then consumed by various applications and translated into insights that support multiple location-aware use cases.</p>



<p>The Bluetooth-based solution supports two architectures: one based on the radio signal&#8217;s angle of arrival (AoA) at the anchor point, the other based on its angle of departure (AoD).</p>



<p>In the AoA-based scenario, a mobile device carries a tag that sends a Bluetooth signal with direction-finding information. Antenna arrays measure these signals, and a network-based engine computes the angle of arrival from the slight phase differences between the signals received by the individual antennas.</p>



<p>With AoD, a mobile device receives Bluetooth signals from antenna arrays. The device uses signal measurements to find the direction from which the signal departed the antenna array. The slight phase differences between received signals allow the angle of departure to be calculated, given that the antenna array geometry is known.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img decoding="async" width="1157" height="672" src="https://taurotech.com/wp-content/uploads/2024/02/1.png" alt="Bluetooth AoA and AoD based Indoor Location Tracking" class="wp-image-3205" style="width:589px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2024/02/1.png 1157w, https://taurotech.com/wp-content/uploads/2024/02/1-768x446.png 768w" sizes="(max-width: 1157px) 100vw, 1157px" /><figcaption class="wp-element-caption"><a href="https://www.bluetooth.com/learn-about-bluetooth/feature-enhancements/direction-finding/">Figure 1: Bluetooth AoA and AoD based Indoor Location Tracking</a></figcaption></figure>
</div>


<p>To pinpoint a mobile device indoors, a single anchor with multiple antennas can be used to figure out its location relative to the anchor. For higher accuracy, multiple stationary anchors with multi-antenna arrays are employed. By triangulating signals from several anchors and finding their intersection, the exact position of the device can be calculated.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1265" height="742" src="https://taurotech.com/wp-content/uploads/2024/02/2.png" alt="Technical diagram explaining triangulation-based signal positioning for indoor tracking, showing how multiple anchor nodes calculate the angle of a client device to achieve 1-2m accuracy within a 20-30m range." class="wp-image-3206" style="width:575px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2024/02/2.png 1265w, https://taurotech.com/wp-content/uploads/2024/02/2-768x450.png 768w" sizes="(max-width: 1265px) 100vw, 1265px" /><figcaption class="wp-element-caption">Figure 2:  Triangulation based signal positioning</figcaption></figure>
</div>


<ul class="wp-block-list">
<li><strong>Ultra-wideband (UWB) indoor positioning</strong></li>
</ul>



<p>UWB transmits information using a train of impulses instead of a modulated sine wave, a characteristic that makes it well suited to precision ranging. Because each pulse has an extremely sharp rising edge, the receiver can measure the signal&#8217;s arrival time very accurately. Furthermore, the pulses are extremely narrow, usually lasting less than two nanoseconds.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1309" height="324" src="https://taurotech.com/wp-content/uploads/2024/02/3.png" alt="Technical comparison of signal types for indoor positioning, showcasing waveform graphs of Narrowband, Ultra Wideband (UWB), UWB with Reflections, and UWB with Noise to demonstrate UWB's superior precision in time-of-flight measurements." class="wp-image-3207" style="width:693px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2024/02/3.png 1309w, https://taurotech.com/wp-content/uploads/2024/02/3-768x190.png 768w" sizes="(max-width: 1309px) 100vw, 1309px" /><figcaption class="wp-element-caption">Figure 3: UWB signaling examples</figcaption></figure>
</div>


<p>The nature of these signals allows UWB pulses to be <a href="https://www.mdpi.com/1424-8220/23/12/5710" type="link" id="https://www.mdpi.com/1424-8220/23/12/5710">resistant to multipath effects</a> and identifiable even in noisy environments, giving UWB significant ranging advantages over traditional narrowband signals. In addition, the strict spectral mask keeps the transmit power at the noise floor, so UWB does not interfere with other radio communication systems operating in the same frequency bands; it merely raises the overall noise floor slightly, a principle very similar to spread-spectrum technologies such as CDMA.</p>
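<p>A minimal sketch of how accurate arrival-time measurement translates into distance, using single-sided two-way ranging (one common UWB scheme); the timing values below are illustrative only:</p>

```python
C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round_s: float, t_reply_s: float) -> float:
    """Single-sided two-way ranging: the initiator measures the round-trip
    time, the responder's known reply delay is subtracted, and the remaining
    time of flight (one way in each direction) is converted to distance."""
    tof = (t_round_s - t_reply_s) / 2
    return C * tof

# A device 10 m away: ~33.4 ns of flight each way, plus a 1 us reply delay
t_round = 2 * (10.0 / C) + 1e-6
print(round(twr_distance(t_round, 1e-6), 3))  # 10.0
```

<p>Because the flight time of a 10 m link is only tens of nanoseconds, the sharp UWB pulse edge is what makes timestamping at this resolution feasible at all.</p>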



<ul class="wp-block-list">
<li><strong>Wi-Fi indoor positioning</strong></li>
</ul>



<p>Wi-Fi can enable the detection and tracking of people, devices, and assets, and indoor positions can be calculated using existing Wi-Fi access points. Wi-Fi is nearly ubiquitous indoors, used by almost all wireless devices and network infrastructure, including smartphones, computers, IoT devices, routers, and APs. To detect and locate Wi-Fi transmitters such as smartphones and tracking tags, Wi-Fi indoor positioning solutions employ existing access points or Wi-Fi-enabled sensors, and Wi-Fi-based positioning systems can use several different methods to determine a device&#8217;s location.</p>



<p><strong>Wi-Fi Positioning Using Access Points</strong>: This approach reuses the building&#8217;s existing Wi-Fi infrastructure. The installed APs detect transmissions from nearby Wi-Fi devices, both on and off the network, and forward the measurements to a server, where the central IPS determines each device&#8217;s position.</p>



<p><strong>Wi-Fi Positioning Using Sensors</strong>: Sensors deployed at fixed positions indoors passively detect and locate transmissions from smartphones, asset-tracking tags, and other Wi-Fi devices. The location information collected by the sensors is then transmitted to a server and incorporated by the central indoor positioning system (IPS).</p>



<p>Wi-Fi positioning methods often rely on the Received Signal Strength Indicator (RSSI) to figure out where the device is located. In applications using RSSI, several Wi-Fi access points, set in fixed positions, pick up signals from transmitting Wi-Fi devices and measure the strength of those signals. The location engine then uses multilateration algorithms to analyze this data and estimate the position of the transmitting devices.</p>
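<p>As a hedged sketch, RSSI is commonly converted to a distance estimate with the log-distance path-loss model before multilateration; the &#8722;40 dBm reference power at 1 m and path-loss exponent of 2 used below are hypothetical calibration values that vary per environment:</p>

```python
def rssi_to_distance(rssi_dbm: float, rssi_at_1m_dbm: float = -40.0,
                     path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d)
    inverted to: d = 10 ** ((RSSI(1 m) - RSSI) / (10 * n))
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-60.0))  # 10.0 (metres, under these assumed parameters)
```

<p>Feeding three or more such distance estimates from fixed APs into a multilateration solver yields the position estimate; in practice the exponent n must be calibrated per building.</p>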


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="630" height="549" src="https://taurotech.com/wp-content/uploads/2024/02/4.png" alt="Technical diagram of RSSI-based Wi-Fi positioning, demonstrating trilateration where a smartphone's location is determined by measuring the Received Signal Strength Indicator (RSSI) from three different Wi-Fi access points." class="wp-image-3208" style="width:378px;height:auto"/><figcaption class="wp-element-caption">Figure 4: RSSI based Wi-Fi positioning</figcaption></figure>
</div>


<h3 class="wp-block-heading"><strong>Indoor Location Tracking Benefits</strong></h3>



<ul class="wp-block-list">
<li><strong>Enhanced User Convenience</strong></li>
</ul>



<p>Indoor positioning makes indoor spaces more convenient for users. Thanks to IPS, users no longer need to indicate their current location when moving from one point to another indoors, and they can see doors, turns, and other obstacles in advance on a real-time map. Modern warehouses, meanwhile, are like complex living organisms, with rapidly moving machinery, products, robots, and personnel; real-time tracking of all these moving pieces is necessary for efficient, minute-by-minute operation.</p>



<p>In one application, Tauro Technologies developed a UWB radio-based solution to assist firefighters and first responders at the scene of an incident. Fast, accurate decisions can save lives and keep first responders safe, and such mission-critical, split-second decisions depend on accurate real-time information. Tauro Technologies developed both the hardware and the triangulation software for indoor location tracking to meet those requirements.</p>



<ul class="wp-block-list">
<li><strong>Exclusion of possible human errors</strong></li>
</ul>



<p>Asset tracking also eliminates potential human errors. People get tired or have lapses in judgment, accidentally misplacing valuable assets or leaving a highly sensitive location unstaffed. Indoor location tracking systems can raise alerts when people or assets leave a predefined area, a capability known as geofencing: users can opt to receive an email, text, or voice notification whenever someone or something enters or leaves the area.</p>
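<p>A geofence check can be as simple as testing whether a tag&#8217;s position has crossed a zone boundary since the last update; this minimal sketch assumes a circular zone and planar coordinates:</p>

```python
import math

def geofence_alert(pos, center, radius_m, was_inside):
    """Return "entered" or "exited" when a tracked tag crosses a circular
    geofence boundary, or None when its inside/outside state is unchanged."""
    inside = math.hypot(pos[0] - center[0], pos[1] - center[1]) <= radius_m
    if inside and not was_inside:
        return "entered"
    if not inside and was_inside:
        return "exited"
    return None

# A tag previously inside a 10 m zone is now 12 m from its center
print(geofence_alert((12.0, 0.0), (0.0, 0.0), 10.0, was_inside=True))  # exited
```

<p>A production system would layer notification delivery (email, SMS, voice) on top of events like these.</p>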



<ul class="wp-block-list">
<li><strong>Swift Incident Response</strong></li>
</ul>



<p>Indoor location tracking improves safety by providing real-time location data during emergencies. Lone workers who are out of communication can trigger assistance requests, allowing security and emergency services to pinpoint their exact location, and leadership can identify the security officers nearest to a reported incident and direct them to intervene efficiently.</p>



<ul class="wp-block-list">
<li><strong>Location-based marketing</strong></li>
</ul>



<p>The fusion of indoor navigation and positioning creates location-based marketing opportunities. Imagine tailoring personalized experiences and special offers when shoppers linger at the pasta aisle, or greeting stadium visitors with personalized messages based on ticket sales data. This not only enhances user engagement but also increases revenue. Offering marketing opportunities, such as push notifications, to exhibitors, sponsors, or partners makes a venue more appealing and can boost its ROI.</p>



<h3 class="wp-block-heading"><strong>Indoor Location Tracking Use Cases</strong></h3>



<p>Indoor positioning is a reliable and convenient modern solution that supports a wide range of applications: asset tracking, item finding, point-of-interest (POI) information, access control and security, people tracking and consumer behavior analysis, and proximity marketing.</p>



<p>Below are some examples of indoor positioning system applications:</p>



<ul class="wp-block-list">
<li><strong>Airport and Hospitality</strong>: Airports and hotels can track heavy equipment, tools, passenger baggage and visitors to improve daily operations, increase safety, and increase customer satisfaction.</li>



<li><strong>Medical Institutions and Healthcare</strong>: High-quality healthcare services allow patients to get the treatments they need without potentially harmful delays. By using this technology, staff, patients, and equipment like beds and wheelchairs can be easily located. It means better attendance checking, effective supervision, and better equipment maintenance are at your fingertips.</li>



<li><strong>Parking</strong>: Indoor location systems can be used to guide drivers to available parking spaces in indoor parking garages or lots.</li>



<li><strong>Warehouse</strong>: Real-time package location, inventory monitoring, and forklift high-precision positioning bring valuable information into the ERP and provide reliability and safety into warehouses.</li>



<li><strong>Museum</strong>: Mobile navigation, precise positioning, and low-cost tags bring new values to tourism location services. IPS can be used to enhance the visitor experience in museums by providing location-based information and interactive exhibits.</li>
</ul>



<h3 class="wp-block-heading"><strong>Challenges of Indoor Location Tracking Systems</strong></h3>



<p>Indoor navigation presents distinct challenges compared to outdoor environments, where GPS technology is prevalent. The already complex task of indoor positioning is made harder by intricate building layouts, which require specialized solutions for navigating within enclosed spaces.</p>



<p>Here are some of the common challenges of indoor location tracking systems, along with their solutions:</p>



<ul class="wp-block-list">
<li><strong>Complex Building Layouts</strong></li>
</ul>



<p><strong>Challenge</strong>: Large public venues are often complicated, spanning many floors, which makes tracking information hard to maintain and update. These spaces also change frequently due to renovations or temporary setups, so navigation systems must adapt in real time.</p>



<p><strong>Solution</strong>: Employing indoor mapping tools that facilitate collaboration and crowd-sourced mapping can play a crucial role in preserving accurate and current layouts. These tools empower users and venue owners to actively participate in the mapping process, guaranteeing the continued relevance and precision of the navigation system.</p>



<ul class="wp-block-list">
<li><strong>Signal Interference</strong></li>
</ul>



<p><strong>Challenge</strong>: In areas with high device density, the abundance of devices and wireless networks may cause signal interference. Such interference can compromise the reliability of indoor positioning technologies, leading to navigation inaccuracies and inconsistencies.</p>



<p><strong>Solution</strong>: Implement machine learning techniques to filter noise and interference, enhancing indoor tracking performance. By combining machine learning with BLE and UWB technologies, an adaptive and interference-resistant solution can be achieved, significantly improving indoor tracking performance in challenging environments.</p>
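<p>Even before applying machine learning, a simple smoothing baseline illustrates the idea of filtering noisy measurements; learned filters would replace or augment something like this exponential moving average (the RSSI samples below are invented):</p>

```python
def ema_filter(samples, alpha=0.2):
    """Exponential moving average: a simple noise-suppression baseline for
    raw RSSI/ranging streams. Smaller alpha means stronger smoothing."""
    smoothed, est = [], None
    for s in samples:
        est = s if est is None else alpha * s + (1 - alpha) * est
        smoothed.append(est)
    return smoothed

noisy = [-60, -75, -58, -61, -90, -59]  # one multipath spike at -90 dBm
print(round(ema_filter(noisy)[4], 1))  # -67.4: the -90 dBm spike is damped
```

<p>Kalman filters or trained models play the same role in production systems, with the added ability to reject interference patterns a fixed smoothing constant cannot.</p>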



<ul class="wp-block-list">
<li><strong>Battery Consumption</strong></li>
</ul>



<p><strong>Challenge</strong>: Indoor navigation apps often drain device batteries quickly, posing an issue for users without easy access to charging.</p>



<p><strong>Solution</strong>: Optimizing the indoor navigation app’s energy consumption is crucial. Developers should focus on reducing unnecessary background processes and utilizing efficient programming techniques. Additionally, incorporating low-power mode options can help extend device battery life while using the navigation application.</p>



<h3 class="wp-block-heading"><strong>Conclusion</strong></h3>



<p>Tauro Technologies’ experience in RF communications, power management as well as firmware and software design enables the development of reliable and energy efficient location tracking systems. Tauro Technologies has experience in a wide variety of applications including military, scientific, medical, industrial robotics, and communications. <a href="https://taurotech.com/contact-us/">Get in touch</a> with us for more information.</p>



<p>The post <a href="https://taurotech.com/blog/indoor-location-tracking-systems/">Indoor Location Tracking Systems</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Software and Firmware for Embedded Systems</title>
		<link>https://taurotech.com/blog/software-and-firmware-for-embedded-systems/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=software-and-firmware-for-embedded-systems</link>
		
		<dc:creator><![CDATA[Sargis Ghazaryan]]></dc:creator>
		<pubDate>Thu, 09 Nov 2023 03:09:30 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[firmware development]]></category>
		<category><![CDATA[hardware design]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=3060</guid>

					<description><![CDATA[<p>Software and Firmware for Embedded Systems It is common for the majority to get confused with the terms “Embedded firmware” and “Embedded software”. In this article, we will discuss differences and similarities between embedded software and firmware and offer examples to help the reader differentiate between those two. We will kick things off by getting&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/software-and-firmware-for-embedded-systems/">Software and Firmware for Embedded Systems</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center"><strong>Software and Firmware for Embedded Systems</strong></h1>



<p>The terms &#8220;embedded firmware&#8221; and &#8220;embedded software&#8221; are commonly confused. In this article, we will discuss the differences and similarities between embedded software and firmware and offer examples to help the reader tell the two apart.</p>



<p>We will kick things off by getting to know what an embedded system is and exploring its core components. From there, we&#8217;ll dive into the challenges that developers encounter, from the intricate world of clocking mechanisms to navigating the nuances of firmware and managing power efficiently.</p>



<p><strong>What is an Embedded System?</strong></p>



<p>An embedded system is a computer system with a specific function, composed of a microprocessor, memory, and various input/output peripherals. These systems are often found within larger mechanical or electronic assemblies, hence the term &#8220;embedded&#8221;.</p>



<p>Embedded systems come in various forms, with some being standalone devices, while others function as integral parts of a larger system. </p>



<p>These systems have a presence in a wide range of applications, including industrial machines, consumer electronics, agricultural and processing equipment, automobiles, medical devices, cameras, digital watches, household appliances, airplanes, vending machines, toys, and even modern mobile devices.</p>



<p>Embedded systems consist of hardware and software components. The hardware includes a microprocessor or microcontroller, memory, input/output interfaces, timers, and a power supply. These components require software and firmware to bring them to life and function as a system.</p>



<h3 class="wp-block-heading"><strong>Challenges in Embedded Systems</strong></h3>



<p>Embedded product developers grapple with a multitude of challenges as they strive to design and develop efficient and reliable embedded systems. Here, we&#8217;ll explore some of the key challenges:</p>



<h4 class="wp-block-heading">Clocking Challenges</h4>



<ul class="wp-block-list">
<li><strong>Synchronization:</strong> Achieving precise clock synchronization across different components within an embedded system is crucial for seamless operation. Variations in clock timing can lead to synchronization issues and data errors.</li>



<li><strong>Low Power Clocking: </strong>Balancing the need for high-performance clock speeds with power efficiency is a constant challenge, especially in battery-operated devices.</li>



<li><strong>Clock Domain Crossing: </strong>Managing different clock domains within a single system can be complex and requires careful attention to avoid synchronization problems.</li>
</ul>



<h4 class="wp-block-heading">Power Management Challenges</h4>



<ul class="wp-block-list">
<li><strong>Energy Efficiency:</strong> Balancing performance and power consumption is critical, especially in battery-powered devices. Achieving optimal energy efficiency while maintaining functionality is a constant struggle.</li>



<li><strong>Dynamic Power Management: </strong>Efficiently managing power in dynamic workloads, where system components operate at varying levels of activity, is a complex task.</li>



<li><strong>Thermal Management:</strong> Preventing overheating and thermal issues in embedded systems, which can affect performance and longevity, is another challenge.</li>
</ul>



<h4 class="wp-block-heading">Firmware Challenges</h4>



<ul class="wp-block-list">
<li><strong>Complexity: </strong>Developing firmware that is robust, efficient, and adaptable can be a significant challenge. Firmware must handle various tasks, from hardware control to communication protocols.</li>



<li><strong>Security:</strong> Ensuring the security of embedded systems is paramount. Firmware vulnerabilities can expose systems to cyber threats, making robust security measures essential.</li>



<li><strong>Compatibility:</strong> Firmware must often interact with diverse hardware components, requiring compatibility testing and updates as hardware evolves.</li>
</ul>



<h4 class="wp-block-heading"><strong>GUI and Dashboards in Embedded Systems</strong></h4>



<p>Graphical User Interfaces (GUIs) and dashboards play a crucial role in embedded systems, as they provide an interactive and user-friendly way to control and monitor devices and systems with limited computing resources.</p>



<h3 class="wp-block-heading"><strong>What is Embedded Software?</strong></h3>



<p>Embedded software is designed to operate in size-, weight-, and power-optimized (SWaP-optimized) non-PC devices. It is written for the specific hardware it runs on and must often contend with the limited processing power and memory capacity of that device.</p>



<p>A simple example of embedded software is the control of household lighting by an 8-bit microcontroller with minimal memory. Embedded software can also be as complex as that powering modern smart cars, whose systems manage various electronic components such as climate control, adaptive cruise control, collision detection, and navigation.</p>



<p>Embedded software and application software differ primarily in their scope and functionality. Embedded software often serves as the device&#8217;s operating system itself. It operates under strict limitations imposed by the device&#8217;s functionality, which tightly constrains updates and additions to ensure compatibility.</p>



<p>On the other hand, application software provides specific functionality within a general-purpose computer and operates on a complete OS. This separation means that application software has more flexibility and fewer restrictions when it comes to utilizing system resources.</p>



<h3 class="wp-block-heading"><strong>What is Embedded Firmware?</strong></h3>



<p>Firmware serves as a link between the hardware and the other software applications that power the system. It is a special type of embedded software that was historically stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). These earlier forms of firmware were essentially unchangeable after initial programming, which is why it is called &#8220;firm&#8221;.</p>



<p>However, technology has evolved and moved toward storing firmware in Flash memory devices. This advancement offers notable advantages, including easier reprogramming and upgrade capabilities as well as significantly increased storage capacity when compared to its ROM and EEPROM predecessors.</p>



<p>Summing up, the primary role of firmware is to initiate the device&#8217;s startup process and provide the essential orchestration to support operation among the various hardware components.</p>



<p>Hardware developers use embedded firmware for controlling hardware devices and their functionality similar to the way OS controls the function of software applications. Embedded firmware exists in everything from simple appliances that have computer control, like toasters, to complex tracking systems in missiles. The toaster would likely never need updating but the tracking system sometimes does.</p>



<h3 class="wp-block-heading"><strong>The key difference between Embedded Software vs Firmware</strong></h3>



<p>Firmware is simply a specific subset of embedded software. Without the operating system and middleware layers above it, firmware acts only as a low-level translator for the hardware and cannot do useful work without other software running on top of it. It is just one layer, whereas a full embedded software stack is required for a device to function.</p>



<p>Unlike application software, which is updated often, <a href="https://s3vi.ndc.nasa.gov/ssri-kb/topics/24/">firmware is typically not updated</a> after it has been released and is working properly.</p>



<p>If we use a traffic-light analogy, here is how the embedded system components fit: hardware (red) is the most difficult to update on a working product; firmware (orange) is not impossible to update but comes with challenges; and software (green) is easy to update and is updated frequently.</p>



<p>Interested to know more? <a href="https://taurotech.com/contact-us/">Get in touch</a> with us for details.</p>



<p>The post <a href="https://taurotech.com/blog/software-and-firmware-for-embedded-systems/">Software and Firmware for Embedded Systems</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Leveraging COM Express and COM-HPC for AI Workloads</title>
		<link>https://taurotech.com/blog/com-express-for-ai-workloads/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=com-express-for-ai-workloads</link>
		
		<dc:creator><![CDATA[Sargis Ghazaryan]]></dc:creator>
		<pubDate>Tue, 18 Jul 2023 05:06:21 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Hardware design]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Accelerator]]></category>
		<category><![CDATA[Axelera]]></category>
		<category><![CDATA[Blaize]]></category>
		<category><![CDATA[COM Express]]></category>
		<category><![CDATA[COM-HPC]]></category>
		<category><![CDATA[Edge AI]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[Hailo]]></category>
		<category><![CDATA[M.2]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2931</guid>

					<description><![CDATA[<p>Leveraging COM Express and COM-HPC for AI Workloads As the demand for artificial intelligence continues to rise in various industries, from healthcare and finance to manufacturing and autonomous vehicles, industrial computers face the challenge of optimizing AI workloads. Developers are constantly seeking efficient and scalable solutions to solve these challenges. One such solution is using&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/com-express-for-ai-workloads/">Leveraging COM Express and COM-HPC for AI Workloads</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center">Leveraging COM Express and COM-HPC for AI Workloads</h1>



<p>As the demand for artificial intelligence continues to rise across industries, from healthcare and finance to manufacturing and autonomous vehicles, industrial computers face the challenge of optimizing AI workloads, and developers are constantly seeking efficient, scalable solutions. One such solution is COM Express, a standardized form factor that can serve as a flexible computing platform for various AI workloads.</p>



<p>With the ability to choose from a wide variety of CPUs and the flexibility to right-size the CPU for a given AI workload, COM Express empowers organizations to create efficient, scalable, and cost-effective AI solutions. In addition to harnessing the advantages of COM Express, developers can add AI accelerators to further optimize their solutions. COM-HPC, a newer specification, enables still higher performance and scalability for high-performance computing applications.</p>



<p>The Intel Alder Lake x86 CPU is an ideal solution for COM Express modules targeting AI workloads due to built-in AI acceleration with Intel Deep Learning Boost technology. This integrated AI capability allows for efficient execution of AI workloads, such as neural network inference and deep learning tasks. By leveraging the built-in AI accelerator, COM Express modules based on Alder Lake can provide optimized performance for AI applications without the need for additional external accelerators.</p>



<h3 class="wp-block-heading"><strong>What is COM Express?</strong></h3>



<p>COM Express is a highly integrated and compact computer on module that is designed to offer scalability and flexibility by providing a standardized form factor and interface for integrating different processor architectures and I/O configurations. Introduced by the PCI Industrial Computer Manufacturers Group in 2005, COM Express provides a single circuit board with integrated RAM.</p>



<p>This family of modular, small form factor modules has gained significant traction in various industries, including automation, gaming, retail, transportation, robotics, and medical fields. With eight different types, four sizes, and three major revisions, COM Express promotes vendor technology reuse while catering to mid-range edge processing and networking requirements.</p>



<p>The key differentiator of COM Express from traditional single-board computers (SBCs) lies in its ability to plug off-the-shelf modules into custom carrier boards designed for specific applications. This enables an upgrade path for the CPUs while keeping the carrier board intact. By using a custom COM Express carrier board, all necessary signals can be efficiently routed to the peripherals, while COM Express processor modules serve as the main controller. These advanced features ensure the versatility and adaptability of COM Express for diverse application requirements.</p>



<h3 class="wp-block-heading"><strong>Comparing COM-HPC with COM Express</strong></h3>



<p>COM-HPC is an evolution of the COM Express standard, uniquely tailored to address the demands of high-performance computing applications. With its focus on enhanced performance, scalability, and advanced features, COM-HPC caters to the same applications and markets as COM Express, but with notable differentiators. It boasts higher-end CPUs, expanded memory capacity, and increased and faster I/O capabilities. It&#8217;s essential to emphasize that COM-HPC does not aim to replace COM Express; rather, the two standards exist as distinct entities in the field of embedded computing, offering developers a broader spectrum of choices to meet specific application requirements.</p>



<p>COM-HPC brings significant improvements over COM Express for AI workloads, particularly in terms of PCIe lanes and PCIe generation support:</p>



<ul class="wp-block-list">
<li>Increased PCIe Lanes: One of the key advantages of COM-HPC over COM Express is the availability of more PCIe lanes. COM Express has a limited number of PCIe lanes, which can restrict the connectivity options and the number of I/O interfaces or accelerators that can be integrated. In contrast, COM-HPC modules provide a higher number of PCIe lanes, allowing for more extensive connectivity and the integration of multiple high-speed devices.</li>



<li>PCIe Gen4/5 Support: Another crucial enhancement in COM-HPC is the support for PCIe Gen4 and Gen5, whereas COM Express supports up to PCIe Gen3. PCIe Gen4 and Gen5 offer higher data transfer rates and improved bandwidth compared to Gen3. This is particularly advantageous for AI workloads that require fast data movement between the CPU, GPU, storage devices, and other peripherals.</li>
</ul>
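<p>The bandwidth difference between generations is easy to quantify. Using the published per-generation transfer rates (8, 16, and 32 GT/s for Gen3, Gen4, and Gen5) and the 128b/130b line encoding these generations share, a rough one-way throughput estimate is:</p>

```python
def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable one-way bandwidth in GB/s for a PCIe link.
    Gen3 runs at 8 GT/s, Gen4 at 16 GT/s, Gen5 at 32 GT/s, all with
    128b/130b encoding; protocol overhead is ignored in this estimate."""
    gts = {3: 8.0, 4: 16.0, 5: 32.0}[gen]
    return gts * (128 / 130) / 8 * lanes  # GT/s -> GB/s per lane, times lanes

print(round(pcie_bandwidth_gbps(3, 4), 2))  # 3.94  (x4 Gen3 link)
print(round(pcie_bandwidth_gbps(5, 4), 2))  # 15.75 (x4 Gen5 link)
```

<p>An x4 Gen5 link thus moves roughly four times the data of an x4 Gen3 link, which is exactly the headroom accelerator-heavy AI carriers benefit from.</p>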



<p>In summary,  newer generation processors, paired with higher data rates, dramatically lower the size, power and cost requirements of the systems required to perform the AI tasks.</p>



<h3 class="wp-block-heading"><strong>The Advantages of Choosing COM Express for AI Workloads</strong></h3>



<p>COM Express offers several distinct advantages for AI workloads. As a flexible and scalable platform, it allows developers to adapt their AI systems to specific requirements such as CPU performance and power budget, then design a carrier board that integrates the module with additional AI-specific components, such as AI accelerators. Below is a block-diagram example of a COM Express platform with an AI accelerator.</p>


<div class="wp-block-image is-resized">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="2044" height="2164" src="https://taurotech.com/wp-content/uploads/2023/07/Block-Diagram.drawio.png" alt="Block diagram of a COM Express Module architecture showing connections to an AI Accelerator, PCIe slots, MiniPCIe for LTE/WiFi, and I/O ports like HDMI, Dual USB 3.0, and Dual GbE RJ-45." class="wp-image-2955" style="aspect-ratio:0.9445378151260504;width:511px;height:auto" srcset="https://taurotech.com/wp-content/uploads/2023/07/Block-Diagram.drawio.png 2044w, https://taurotech.com/wp-content/uploads/2023/07/Block-Diagram.drawio-768x813.png 768w, https://taurotech.com/wp-content/uploads/2023/07/Block-Diagram.drawio-1451x1536.png 1451w, https://taurotech.com/wp-content/uploads/2023/07/Block-Diagram.drawio-1934x2048.png 1934w" sizes="(max-width: 2044px) 100vw, 2044px" /><figcaption class="wp-element-caption">&nbsp;<strong>Figure 1: </strong>COM Express AI Compute System</figcaption></figure>
</div>


<p>Here are the key advantages of choosing COM Express (or COM-HPC) for AI workloads:</p>



<ul class="wp-block-list">
<li>Flexibility and Scalability: COM Express allows developers to choose from a wide range of CPU options, so they can select the module that best matches the computing needs of their AI workloads. Whether it&#8217;s complex neural network inference or a deep learning task, the platform can be customized to deliver optimal performance.</li>



<li>Modular Design: COM Express follows a modular design approach with a separate CPU module and carrier board. This modularity simplifies system customization and future upgrades. Developers can easily swap out or upgrade the CPU module without redesigning the entire system, saving time and effort while adapting to evolving AI requirements.</li>



<li>Streamlined Integration: COM Express adheres to industry-standard form factors and interfaces, ensuring compatibility across different vendors. This standardized approach simplifies system integration, reducing development complexity and time to market. Developers can focus on optimizing their AI algorithms and software, confident that the hardware integration will be seamless.</li>



<li>Rich Connectivity Options: COM Express provides a wide array of interfaces, including Ethernet, USB, PCIe, and DisplayPort interfaces. These interfaces enable effortless integration with various peripherals, sensors, and external devices commonly used in AI applications. The rich connectivity options enhance data I/O capabilities, facilitating efficient communication and interaction within the AI system.</li>



<li>Long-Term Availability and Support: COM Express offers long-term availability and support, ensuring continuity for AI deployments. This is particularly crucial for industries that rely on stable and long-lasting AI systems. With a consistent platform and extended availability, developers can plan for long-term deployment and maintenance, with access to software updates and technical assistance.</li>



<li>Cost Optimization: COM Express provides a cost-effective solution for AI workloads. By leveraging COM Express, developers can save on development costs and reduce time to market. The modular design allows for efficient resource allocation, ensuring optimal performance while minimizing unnecessary expenses.</li>



<li>Time to Market: Since compute modules are widely available in the embedded marketplace, COM Express enables developers to focus on the I/O needs, the addition of accelerators, the AI models, and the application software.</li>
</ul>



<h3 class="wp-block-heading"><strong>Real-World Applications of COM Express for AI Workloads</strong></h3>



<p>As stated above, COM Express modules offer immense potential for developers to optimize AI workloads on industrial computers, enabling cost-effective solutions and large-scale deployments. Let&#8217;s delve into real-world examples to showcase the significance of this optimization trend.</p>



<p>In the field of autonomous vehicles, this optimization allows vehicles to navigate complex environments, enhancing safety and efficiency. By leveraging COM Express modules, developers can achieve cost-effective solutions by upgrading existing industrial computers with optimized AI capabilities, enabling large-scale deployments of autonomous vehicles across transportation networks.</p>



<p>Industrial automation is another area where COM Express systems can revolutionize AI workloads. By optimizing AI algorithms on industrial computers using COM Express modules, developers can achieve significant cost savings and efficiency gains in manufacturing processes. For instance, AI-powered computer vision systems can inspect and detect defects in real-time, improving quality control and reducing production costs. The use of COM Express modules enables industrial computers to handle these AI workloads effectively, making cost-effective solutions viable for large-scale deployment in manufacturing facilities.</p>



<p>In the healthcare sector, COM Express systems can optimize AI workloads on industrial computers to improve diagnostics, patient monitoring, and personalized treatment. For example, by leveraging COM Express systems, developers can enable industrial computers to process complex medical imaging data and apply AI algorithms for more accurate and timely diagnosis. This optimization trend in AI workloads allows healthcare providers to deliver cost-effective care, benefiting patients globally.</p>



<h3 class="wp-block-heading"><strong>What to choose</strong></h3>



<p>AI accelerators can be paired with a COM Express module either as separate modules on the carrier or integrated directly into the carrier board&#8217;s design. This modular approach provides scalability and flexibility, allowing system designers to tailor AI processing capabilities to the specific requirements of their applications. It also enables easy upgrades or replacements of AI accelerators without modifying the entire system, making it both cost-effective and future-proof. AI accelerators such as <a href="https://www.blaize.com/">Blaize</a>, <a href="https://hailo.ai/">Hailo</a> or <a href="https://www.axelera.ai/">Axelera</a> paired with a COM Express module can provide significant benefits. For example, combining the Axelera M.2 AI Edge accelerator module with a COM Express carrier board can achieve up to 120 TOPS of AI performance, with the flexibility of switching between CPU families for optimized compute needs.</p>



<p>These accelerators are specifically designed to enhance AI workloads and provide optimized compute capabilities compared to GPUs. This level of compute power can greatly benefit vision processing applications, which often require intensive computations for tasks such as object detection and classification.</p>



<h3 class="wp-block-heading"><strong>Conclusion</strong></h3>



<p>COM Express and COM-HPC offer a flexible and scalable platform for various AI workloads, allowing developers to customize their systems based on CPU performance, power requirements, and I/O interfaces. CPUs like Intel Alder Lake integrated into COM Express modules provide efficient AI execution, integrated graphics performance, enhanced compute density, ecosystem support, and broad connectivity options. The combination of the CPU with an optional AI accelerator delivers optimized performance, reducing costs and enabling efficient large-scale AI deployments.</p>



<p>With Tauro Technologies’ team of electronic engineers and designers, it becomes possible to design and deploy comprehensive AI processing systems based on x86 and ARM CPUs paired with various AI accelerators. This strategic approach helps bring down costs and ensures the right balance between compute power and AI processing for the system. We can customize the I/O as well as the footprint to fit your application requirements.</p>



<p>Interested to know more?&nbsp;<a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch</a>&nbsp;with us for details.</p>



<p></p>
<p>The post <a href="https://taurotech.com/blog/com-express-for-ai-workloads/">Leveraging COM Express and COM-HPC for AI Workloads</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Dual Orin Controller: The Ideal Safety-Critical Platform for Autonomous Vehicles</title>
		<link>https://taurotech.com/blog/dual-orin/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=dual-orin</link>
		
		<dc:creator><![CDATA[Sargis Ghazaryan]]></dc:creator>
		<pubDate>Fri, 26 May 2023 02:14:25 +0000</pubDate>
				<category><![CDATA[Automotive]]></category>
		<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[ADAS]]></category>
		<category><![CDATA[AGX Orin]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Camera]]></category>
		<category><![CDATA[Dual AGX Orin]]></category>
		<category><![CDATA[Dual Orin]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[Ethernet]]></category>
		<category><![CDATA[GMSL]]></category>
		<category><![CDATA[hardware design]]></category>
		<category><![CDATA[nvidia]]></category>
		<category><![CDATA[Robotics]]></category>
		<category><![CDATA[SOM]]></category>
		<category><![CDATA[trends]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2764</guid>

					<description><![CDATA[<p>Dual Orin Controller: The Ideal Safety-Critical Platform for Autonomous Vehicles As technology evolves, the automotive industry is constantly seeking ways to make driving safe, reliable, and autonomous. In this blog post, we’ll explore the features, functionality, and the impact that a platform based on dual NVIDIA&#8217;s AGX Orin modules offers for the future of vehicle&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/dual-orin/">Dual Orin Controller: The Ideal Safety-Critical Platform for Autonomous Vehicles</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center"><strong>Dual Orin Controller: The Ideal Safety-Critical Platform for Autonomous</strong> Vehicles</h1>



<p>As technology evolves, the automotive industry is constantly seeking ways to make driving safe, reliable, and autonomous. In this blog post, we’ll explore the features, functionality, and the impact that a platform based on dual NVIDIA&#8217;s AGX Orin modules offers for the future of vehicle safety during operation. Additionally, we will elaborate on the concept of safety-critical systems and highlight the distinctions between safety-critical functionalities and ADAS (Advanced Driver Assistance System).</p>



<p>The Jetson AGX Orin is designed for advanced robotics and AI edge applications for manufacturing, logistics, retail, service, agriculture, smart city, healthcare, and life science.  Dual Orin (2 Orin devices on the same motherboard) offers system redundancy, which refers to the presence of backup or duplicate components that can take over in the event of a failure in the primary system.  </p>



<p>ADAS provides driver assistance and convenience, but it is not solely responsible for critical functions that impact safety. Safety-critical functions encompass components directly involved in critical functions such as braking and collision avoidance. Safety-critical systems follow strict standards to ensure reliable operation. </p>



<h3 class="wp-block-heading"><strong>What is Orin?</strong></h3>



<p>The NVIDIA Jetson Orin solution is a SOM (system-on-module) with CPU, GPU, memory, power management, and various high-speed interfaces embedded on a single board. NVIDIA Jetson brings accelerated AI performance to the edge in a power-efficient and compact form factor. The Jetson family of modules all use the same NVIDIA CUDA-X&#8482; software, and support cloud-native technologies like containerization and orchestration to build, deploy, and manage AI at the edge.</p>



<p>NVIDIA’s Orin platform (SoC) has three series for its Jetson products:</p>



<ul class="wp-block-list">
<li><a href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/">Jetson AGX Orin series</a></li>



<li><a href="https://docs.nvidia.com/jetson/archives/r35.3.1/DeveloperGuide/text/HR/JetsonModuleAdaptationAndBringUp/JetsonOrinNxNanoSeries.html">Jetson Orin NX series</a></li>



<li><a href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/">Jetson Orin Nano series</a></li>
</ul>



<p>NVIDIA Jetson Orin modules provide up to 275 TOPS of AI performance, an 8x increase over Jetson Xavier for multiple concurrent AI inference pipelines, in addition to high-speed interface support for multiple sensors.</p>



<p>One of the major features of NVIDIA Jetson Orin is the DLA (Deep Learning Accelerator) which supports next-generation NVDLA 2.0 with 9x the performance of NVDLA 1.0. It enables the GPU to run more complex networks and dynamic tasks.</p>



<h3 class="wp-block-heading"><strong>A Comparison of Orin with Traditional CPU/GPU</strong></h3>



<p>Now, let&#8217;s delve into a comparison between traditional processors and Orin by examining the following key features:</p>



<ul class="wp-block-list">
<li><strong>Architecture</strong></li>
</ul>



<p>NVIDIA Jetson Orin is designed specifically for autonomous machines and edge computing. Jetson AGX Orin modules feature the NVIDIA Orin SoC with an NVIDIA Ampere architecture GPU, an Arm® Cortex®-A78AE CPU, next-generation deep learning and vision accelerators, and a video encoder and decoder, making it highly optimized for tasks like computer vision, deep learning, and robotics.</p>



<p>Traditional CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are more general-purpose processors designed for a wide range of computing tasks, including running operating systems, executing applications, and performing graphics rendering.</p>



<ul class="wp-block-list">
<li><strong>Power Efficiency</strong></li>
</ul>



<p>NVIDIA Jetson AGX Orin series modules are designed with a high-efficiency Power Management Integrated Circuit (PMIC), voltage regulators, and a power tree to optimize power efficiency. It strikes a balance between performance and energy consumption, allowing for longer battery life and reduced power requirements in embedded systems.</p>



<p>While traditional CPUs and GPUs can offer high computational power, they are generally more power-hungry compared to specialized SoCs like Jetson Orin. They are commonly found in desktops, servers, and workstations where power consumption is less constrained.</p>



<ul class="wp-block-list">
<li><strong>AI Performance</strong></li>
</ul>



<p>The NVIDIA Jetson AGX Orin series provides server class performance, delivering up to 275 TOPS of AI performance for powering and managing autonomous systems. Its high performance is ideal for tasks like object detection, image recognition, natural language processing, and autonomous navigation.</p>



<p>Traditional CPUs and GPUs can also handle AI workloads, but they do not provide the same level of performance or efficiency as AI-focused modules like Jetson Orin. GPUs, in particular, have been utilized for parallel processing in deep learning tasks, but they are less power-efficient compared to specialized AI chips.  </p>



<p>In addition, the Jetson Orin modules are extremely compact, enabling the compute platform to have reduced size and weight &#8211; critical for autonomous robots and UAVs.</p>



<ul class="wp-block-list">
<li><strong>Software Ecosystem</strong></li>
</ul>



<p>NVIDIA Jetson Orin is part of NVIDIA&#8217;s Jetson platform, which offers a comprehensive software stack, including drivers, libraries, and frameworks specifically optimized for AI and autonomous applications. It supports popular AI frameworks like TensorFlow, PyTorch, and CUDA, providing developers with familiar tools and resources.</p>



<p>Traditional CPUs and GPUs also have a mature and extensive software ecosystem with support for a wide range of operating systems, development tools, and programming languages. They are compatible with various software frameworks, including those used for AI, but may require additional configuration and optimization for specific AI workloads.</p>



<h3 class="wp-block-heading"><strong>Key differences between NVIDIA Orin and Xavier</strong></h3>



<p>NVIDIA Jetson AGX Xavier and NVIDIA Jetson AGX Orin have the same physical footprint and are pin compatible, while also being in the same price range, with one major difference: the Orin offers much higher performance.</p>



<p>The biggest change is the move from NVIDIA’s Carmel CPU clusters to the Arm Cortex-A78AE on Jetson AGX Orin. <br>The Orin CPU complex is made up of twelve 2.2 GHz cores, each with a 64 KB instruction L1 cache, a 64 KB data cache, and 256 KB of L2 cache. This enables a 1.85x performance increase compared to the eight-core Carmel CPU on Jetson AGX Xavier.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><img loading="lazy" decoding="async" width="1221" height="489" src="https://taurotech.com/wp-content/uploads/2023/05/Screenshot-2023-05-17-193355.png" alt="Jetson AGX Xavier vs Jetson AGX Orin Performance Comparison" class="wp-image-2773" style="width:1221px;height:489px" srcset="https://taurotech.com/wp-content/uploads/2023/05/Screenshot-2023-05-17-193355.png 1221w, https://taurotech.com/wp-content/uploads/2023/05/Screenshot-2023-05-17-193355-768x308.png 768w" sizes="(max-width: 1221px) 100vw, 1221px" /><figcaption class="wp-element-caption">Figure 1: Jetson AGX Xavier vs Jetson AGX Orin Performance Comparison</figcaption></figure>
</div>


<p>Jetson AGX Orin modules deliver an AI performance that can reach 275 TOPS with up to 64 GB of memory, compared to 32 TOPS with up to 32 GB of memory for Jetson Xavier.</p>



<p>Jetson AGX Orin 64GB has 2048 CUDA cores and 64 Tensor cores with up to 170 Sparse TOPS of INT8 Tensor compute and up to 5.3 FP32 TFLOPs of CUDA compute, while Jetson Xavier offers only up to 1.4 FP32 TFLOPs of CUDA compute. The Ampere GPU brings support for sparsity, a fine-grained compute structure that doubles throughput and reduces memory usage.</p>
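<p>The fine-grained sparsity Ampere supports is the 2:4 structured pattern, which can be illustrated with a short sketch (the helper name and sample weights below are purely illustrative, not NVIDIA code): in every group of four weights, only the two largest-magnitude values are kept, halving the data the hardware must fetch and multiply.</p>

```python
def prune_2_4(weights):
    """Illustrative 2:4 structured sparsity: in every group of four
    consecutive weights, keep the two largest-magnitude values and
    zero out the other two."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # indices of the two largest-magnitude entries in this group
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]),
                      reverse=True)[:2]
        pruned.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return pruned

weights = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.4, 0.1]
print(prune_2_4(weights))  # [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.4, 0.0]
```

<p>Because the Tensor cores skip the zeroed entries, networks pruned to this pattern are where the claimed throughput doubling comes from.</p>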



<p>DLA 2.0 provides a highly energy efficient architecture. With this new design, NVIDIA increased local buffering for even more efficiency and reduced DRAM bandwidth. DLA 2.0 additionally brings a set of new features including structured sparsity, depth wise convolution, and a hardware scheduler. This enables up to 105 INT8 Sparse TOPs total on Jetson AGX Orin DLAs compared with 11.4 INT8 Dense TOPS total on Jetson AGX Xavier DLAs.</p>



<p>The 12-core CPU on Jetson AGX Orin 64GB enables 1.85 times the performance compared to the 8-core NVIDIA Carmel CPU on Jetson AGX Xavier. Customers can use the enhanced capabilities of the Cortex-A78AE including the higher performance and enhanced cache to optimize their CPU implementations.</p>



<p>Jetson AGX Orin modules bring support for 1.5 times the memory bandwidth and 2 times the storage of Jetson AGX Xavier, enabling 32GB or 64GB of 256-bit LPDDR5 and 64 GB of eMMC. The DRAM supports a max clock speed of 3200 MHz, with 6400 Mbps per pin, enabling 204.8 GB/s of memory bandwidth.</p>
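<p>The 204.8 GB/s figure follows directly from the bus width and per-pin data rate; a quick sanity check (variable names are ours):</p>

```python
# Peak LPDDR5 bandwidth from the figures quoted above:
# a 256-bit bus with each pin transferring 6400 megabits per second.
bus_width_bits = 256
data_rate_mbps = 6400  # Mbps per pin

# total bits/s across the bus, converted to gigabytes per second
bandwidth_gbs = bus_width_bits * data_rate_mbps / 8 / 1000
print(bandwidth_gbs)  # 204.8
```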



<p>The combination of NVIDIA&#8217;s processing capabilities and power efficiency, along with its safety-critical features, makes it the ideal solution for autonomous applications.</p>



<h3 class="wp-block-heading"><strong>Safety Critical Software in Automotive Safety</strong></h3>



<p>Functional safety in processor-based systems is particularly critical in automotive applications. Apart from the ongoing shift towards autonomous vehicles, cars are increasingly dependent on microprocessors to carry out essential operations and must have redundant systems to enable safety in the event of a component failure.</p>



<p>ISO 26262 serves as the globally recognized standard for ensuring functional safety in the automotive industry. This international standard encompasses both the hardware and software components of a vehicle&#8217;s electrical and electronic (E/E) systems. Throughout the development process, ISO 26262 outlines specific requirements that must be fulfilled to ensure the safety-related functionality of the system, along with the corresponding processes, methodologies, and tools. By adhering to the ISO 26262 standard, manufacturers can ensure that sufficient safety measures are implemented and maintained throughout the entire lifespan of the vehicle.</p>



<p>ISO 26262 offers comprehensive guidelines on determining acceptable risk levels for systems or components and documenting the testing process. It encompasses the following key aspects:</p>



<ul class="wp-block-list">
<li>Defines an automotive safety lifecycle that covers management, development, production, operation, service, and decommissioning stages, allowing for customization of activities during each phase.</li>



<li>Implements an automotive-specific risk-based approach for classifying risk levels known as Automotive Safety Integrity Levels (ASILs).</li>



<li>Utilizes ASILs to specify the required safety measures for achieving an acceptable residual risk.</li>



<li>Establishes requirements for validation and confirmation measures to ensure the attainment of a satisfactory level of safety.</li>
</ul>



<h3 class="wp-block-heading"><strong>Dual AGX Orin</strong> Controller Overview</h3>



<p>The Dual AGX Orin system offers superior computing power compared to a single Orin solution, making it preferable for specific applications that require higher computational power and redundancy.</p>



<p>The Dual Orin Controller&#8217;s computational capacity enables it to handle multiple complex tasks simultaneously. This capability is particularly valuable in scenarios where there is a need for concurrent processing of multiple data streams from various sensors, making it suitable for advanced autonomous machines, commercial vehicles, unmanned distribution vehicles, and unmanned cleaning vehicles.</p>



<p>In safety-critical applications, redundancy is essential to ensure system reliability. The Dual Orin Controller&#8217;s utilization of two AGX Orin modules provides a level of redundancy and failover capabilities. If one module encounters an issue, the other can continue functioning, minimizing the risk of critical system failures and improving the overall reliability of the autonomous machine.</p>



<h3 class="wp-block-heading"><strong>Tauro Technologies</strong> TT300 Dual AGX Orin Controller</h3>



<p>Tauro Technologies&#8217; TT300 Dual AGX Orin compute platform provides exceptional computing power and low energy consumption in a compact form factor.</p>



<p>With up to 400/550 TOPS of AI performance this product can be used in autonomous vehicles, UAVs and robotics. The product is designed for high reliability and redundancy, provides multi-sensor clock synchronization with sub-nanosecond accuracy and millisecond latency for precise timing.</p>



<p>Let&#8217;s take a closer look at TT300 key features:</p>



<ul class="wp-block-list">
<li><strong>Dual Orin Controllers 550 TOPS</strong></li>
</ul>



<p>The TT300 board is equipped with two powerful Orin controllers, delivering combined processing power of 550 TOPS. This immense computing power enables lightning-fast data processing and analysis, making it ideal for handling complex AI workloads.</p>



<ul class="wp-block-list">
<li><strong>Infineon TC397 Safety MCU</strong></li>
</ul>



<p>Ensuring the highest levels of safety and reliability, the TT300 board incorporates the Infineon TC397 safety microcontroller to support safety requirements up to ASIL-D. This MCU plays a crucial role in safeguarding the system against potential hazards and maintaining the integrity of critical operations.</p>



<ul class="wp-block-list">
<li><strong>100Base-T1/1000Base-T1 Ethernet</strong></li>
</ul>



<p>To facilitate efficient and reliable data communication, the TT300 board is equipped with both 100Base-T1 and 1000Base-T1 Ethernet interfaces. These interfaces enable fast and secure data transfer, ensuring smooth integration into existing vehicle network infrastructures.</p>



<ul class="wp-block-list">
<li><strong>Wi-Fi/4G/5G</strong></li>
</ul>



<p>TT300 board supports Wi-Fi, 4G LTE and 5G connectivity, enabling seamless wireless communication and remote access. Whether you need to stream data, receive updates, or control the board remotely, these connectivity features have you covered.</p>



<ul class="wp-block-list">
<li><strong>GMSL2 Interface for Hi-Res Cameras</strong></li>
</ul>



<p>The TT300 board features a GMSL2 interface, enabling reliable connection with high-resolution cameras. This interface supports the transmission of data between the controller and cameras, ensuring high-quality image and video feed for AI applications such as ADAS, object detection, tracking, and recognition.</p>



<p>GMSL cameras are becoming a de facto standard in the automotive industry, where high data rates and long-distance support are required, addressing the need to transport higher video data rates in automotive video systems. <br>In addition to high-bandwidth transmission, long-distance support, and low latency, GMSL cameras also come with the following features:</p>



<ul class="wp-block-list">
<li>Virtual channel support</li>



<li>GMSL1 and GMSL2 backward compatibility</li>



<li>Video duplication</li>



<li>Automatic Repeat Request (ARQ) feature</li>



<li>Compatibility with ARM platforms like the NVIDIA Jetson series</li>
</ul>



<h3 class="wp-block-heading"><strong> I/O</strong> Capabilities</h3>



<p>The TT300 is powered by two NVIDIA Jetson AGX Orin modules, and the Infineon TC397 safety MCU enables the design to meet the highest ASIL-D reliability requirements. The I/O capabilities of the product include automotive as well as industrial Ethernet interfaces, USB, wireless connectivity over 4G/5G and Wi-Fi, GMSL camera and LVDS radar interfaces for ADAS applications, as well as CAN and LIN interfaces for automotive and robotics applications routed to a CMC connector. The wide selection of interfaces and customization options makes this device easily adaptable to various use cases and application scenarios.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="3795" height="632" src="https://taurotech.com/wp-content/uploads/2023/05/IMG_3406.png" alt="TT300 Dual AGX Orin Controller Front I/O" class="wp-image-2846" srcset="https://taurotech.com/wp-content/uploads/2023/05/IMG_3406.png 3795w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3406-768x128.png 768w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3406-1536x256.png 1536w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3406-2048x341.png 2048w" sizes="(max-width: 3795px) 100vw, 3795px" /><figcaption class="wp-element-caption"><a href="https://taurotech.com/products/nvidia-jetson-agx-orin/tt300-dual-agx-orinplatform/">Figure 2: TT300 Dual AGX Orin Controller Front I/O</a></figcaption></figure>
</div>

<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="3568" height="618" src="https://taurotech.com/wp-content/uploads/2023/05/IMG_3414.png" alt="TT300 Dual AGX Orin Controller Rear I/O" class="wp-image-2847" srcset="https://taurotech.com/wp-content/uploads/2023/05/IMG_3414.png 3568w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3414-768x133.png 768w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3414-1536x266.png 1536w, https://taurotech.com/wp-content/uploads/2023/05/IMG_3414-2048x355.png 2048w" sizes="(max-width: 3568px) 100vw, 3568px" /><figcaption class="wp-element-caption"><a href="https://taurotech.com/products/nvidia-jetson-agx-orin/tt300-dual-agx-orinplatform/">Figure 3: TT300 Dual AGX Orin Controller Rear I/O</a></figcaption></figure>
</div>


<h3 class="wp-block-heading"><strong>Conclusion</strong></h3>



<p>Tauro Technologies’ TT300 is one of the industry&#8217;s first platforms to offer the NVIDIA Jetson Orin AGX in a redundant safety-critical setting. This is an ideal system for self-driving vehicles in automotive, mining, and defense sectors as well as autonomous robots and UAVs that require exceptional performance and functional safety certification.<br>We can customize the I/O as well as the product packaging to fit your application requirements – <a href="https://taurotech.com/contact-us/">contact us</a> for details.</p>






<p></p>
<p>The post <a href="https://taurotech.com/blog/dual-orin/">Dual Orin Controller: The Ideal Safety-Critical Platform for Autonomous Vehicles</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>5G Rollout and How It Will Empower the Future of IoT</title>
		<link>https://taurotech.com/blog/5g-rollout-and-iot/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=5g-rollout-and-iot</link>
		
		<dc:creator><![CDATA[Paul Kuepfer]]></dc:creator>
		<pubDate>Tue, 03 Jan 2023 19:57:19 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[IoT]]></category>
		<category><![CDATA[Communication Protocols]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[hardware design]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2402</guid>

					<description><![CDATA[<p>5G Rollout and How It Will Empower the Future of IoT The mass rollout of 5G mobile networks is supposed to play a decisive role in driving the Fourth Industrial Revolution (Industry 4.0), digital transformation, and the expansion of IoT (Internet of Things) and IIoT (Industrial Internet of Things) solutions around the world. The transition&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/5g-rollout-and-iot/">5G Rollout and How It Will Empower the Future of IoT</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="2402" class="elementor elementor-2402" data-elementor-post-type="post">
						<section class="elementor-section elementor-top-section elementor-element elementor-element-6a7c1a72 elementor-section-boxed elementor-section-height-default elementor-section-height-default" data-id="6a7c1a72" data-element_type="section" data-e-type="section">
						<div class="elementor-container elementor-column-gap-default">
					<div class="elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-3c3f768d" data-id="3c3f768d" data-element_type="column" data-e-type="column">
			<div class="elementor-widget-wrap elementor-element-populated">
						<div class="elementor-element elementor-element-439dcc24 elementor-widget elementor-widget-text-editor" data-id="439dcc24" data-element_type="widget" data-e-type="widget" data-widget_type="text-editor.default">
				<div class="elementor-widget-container">
									
<h1 class="has-text-align-center wp-block-heading">5G Rollout and How It Will Empower the Future of IoT</h1>

<p>The mass rollout of 5G mobile networks is supposed to play a decisive role in driving the Fourth Industrial Revolution (Industry 4.0), digital transformation, and the expansion of IoT (Internet of Things) and IIoT (Industrial Internet of Things) solutions around the world. The transition to 5G is still in its early stages as global cellular phone companies started to deploy the first fifth-generation networks just recently, in 2019.</p>

<p>Even though 5G has been among the most frequently mentioned technological trends for several years, the concept is still unfamiliar to many and often misunderstood. That is why now is a perfect time to talk about the 5G rollout in more detail.</p>

<h2 class="wp-block-heading">What is 5G?</h2>

<p>5G is the fifth generation of broadband cellular networks. This new technology standard is the successor to the 4G networks that provide connectivity to the majority of currently used mobile devices and communications. Just as with the cellular networks of previous generations, the service area in 5G is divided into small geographical areas called cells. Mobile devices connected to a 5G cell communicate with each other by radio waves on frequency channels specifically assigned by a base station. Base stations, in turn, are connected either wirelessly or by optical fiber. When a mobile device moves out of one 5G cell’s coverage area, it is automatically handed over to another.</p>

<h2 class="wp-block-heading">5G networks to reach 4.4 bln subscriptions by 2027</h2>

<p>5G cellular networks are expected to support up to a million devices per square kilometer. According to a <a href="https://www.rcrwireless.com/20220901/5g/carriers-add-nearly-70-million-5g-subs-globally-q2-ericsson#:~:text=According%20to%20Ericsson's%20report%2C%205G,total%20of%204.4%20billion%20subscriptions.">recent report</a> by Ericsson, 5G networks are forecast to account for almost half of mobile subscriptions globally by 2027, reaching a total of 4.4 billion subscriptions.</p>

<p>According to the survey, 5G is scaling faster than all previous mobile technology generations, as about a quarter of the world’s population currently has access to 5G coverage.</p>

<p>As of the second quarter of 2022, a total of 218 communications service providers have already launched commercial 5G services, and 24 have launched 5G standalone networks, Ericsson reports. Nearly 70 million new 5G subscriptions were added globally in the second quarter of 2022 alone.</p>

<h2 class="wp-block-heading">Strengths and features of 5G networks</h2>

<p>Let’s take a look at the most notable strengths and technological capabilities that distinguish 5G networks from the previous generations of cellular communication technologies.</p>

<ul class="wp-block-list">
<li><strong>Network reach</strong></li>
</ul>

<p>The signal of a 5G network node typically reaches up to around 500 meters without obstructions but degrades significantly without a clear line of sight. This is why mobile service carriers will need to install many small 5G cell transmitters to deliver a high-quality 5G signal across their networks.</p>

<ul class="wp-block-list">
<li><strong>High speed</strong></li>
</ul>

<p>The connection speed for devices in 5G networks will range between 50 Mbps and 1,000 Mbps (1 Gbit/s) on average. Speeds up to 4 Gbit/s will be reachable with MIMO-based equipment (MIMO stands for multiple-input and multiple-output; it’s a method for multiplying the capacity of a radio signal) in high-frequency mmWave bands. mmWave bands (also known as FR2) are found in the range of 24GHz to 40GHz.</p>

<ul class="wp-block-list">
<li><strong>Error rate</strong></li>
</ul>

<p>Extremely low block error rate (BLER) is one of the biggest advantages of 5G along with high connection speed. BLER is the ratio of the number of erroneous blocks to the total number of blocks transmitted on a digital circuit. Thanks to flexible adaptive MCS (Modulation Coding Scheme), the error rates in 5G networks can be kept extremely low.</p>
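<p>As a quick numeric illustration of the ratio (the figures below are made up, not measurements from any particular network), BLER can be sketched in C as:</p>

```c
/* BLER = erroneous blocks / total blocks transmitted on the circuit.
 * Illustrative sketch only; the example inputs are hypothetical. */
double bler(unsigned erroneous, unsigned total)
{
    return (double)erroneous / (double)total;
}
```

<p>For example, 2 erroneous blocks out of 1,000 transmitted gives a BLER of 0.002, i.e. 0.2%.</p>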

<ul class="wp-block-list">
<li><strong>Latency</strong></li>
</ul>

<p>Exceptionally low latency is another highly anticipated benefit of 5G networks compared to the previous generation of cellular technology. The latency in 5G networks should be in the 8–12 milliseconds range or even lower (as low as 5 milliseconds or less). This is a significant improvement compared to 4G networks with average latency between 60 and 100 milliseconds. Naturally, the latency will be higher during handovers (or handoffs), which is the process of transferring an ongoing call or data session from one channel to another.</p>

<ul class="wp-block-list">
<li><strong>Number of connected devices</strong></li>
</ul>

<p>Another advantage of 5G networks is the fact that each cell of a 5G network can accommodate a greater number of devices at the same time (over one million per square kilometer). All devices in a 5G network will be connected to the Internet and able to exchange information with each other in real time.</p>

<h2 class="wp-block-heading">5G Antenna Design Challenges</h2>

<p>In the course of the evolution of cellular networks from the first generation to the fifth, antenna technologies evolved as well. Antennas that were originally external became internal, multi-band, and eventually multi-antenna, multiple-input and multiple-output (MIMO) designs. </p>

<p>The design of 5G antennas can be challenging in a number of ways. 5G antennas will be much smaller and send data at high frequencies, making the specific location where each individual antenna is placed much more important. </p>

<p>For the manufacturers of 5G antennas, it means that antenna arrays will be needed both on the mobile device and on the base station. The antennas would require more complex feeding and control circuits, as well as high-quality isolation between different antenna arrays. Additionally, the cellular network operators will need to implement new hardware platforms for quick automatic identification of the best locations for antenna placement and the control over interactions of antennas with the network hosting board. </p>

<p>All of this puts considerable pressure on the designers of 5G antennas and related 5G networks-supporting equipment based on embedded systems. </p>

<h2 class="wp-block-heading">5G networks and IoT</h2>

<p>All the advantages of 5G, such as high connection speeds, low latency, and large network capacity, will serve as a great foundation for the rapidly growing number of IoT networks populated by smart devices of all kinds.</p>

<p>Currently, the low capacity of the third and fourth-generation cellular networks is one of the main factors restraining the development of IoT and IIoT (Industrial Internet of Things) solutions. In order to maintain the functionality of large networks of interconnected smart devices, such as mobile gadgets, smart home equipment, smart vehicles, and other solutions, a cellular network needs to have high capacity and bandwidth along with lower latency.</p>

<p>With 5G connection, the concept of IoT networks of the future, where devices of all kinds, from smartwatches to refrigerators, are connected to the Internet and can communicate with each other simultaneously, becomes a reality.</p>

<h2 class="wp-block-heading">Applications for IoT solutions with 5G connectivity</h2>

<p>Empowered by the fifth-gen cellular network technology, <a href="https://taurotech.com/">professionally designed</a> embedded systems and IoT solutions will be able to reach a new level of effectiveness, with applications across multiple fields and industries.</p>

<p>Here are some examples:</p>

<ul class="wp-block-list">
<li><strong>Smart cities</strong></li>
</ul>

<p>A functional 5G network will be able to support a large-scale IoT network of smart city systems and electronic devices all connected to each other. These include energy management systems, street lighting and traffic management solutions, emergency response, security surveillance, and many other components.</p>

<ul class="wp-block-list">
<li><strong>Autonomous driving</strong></li>
</ul>

<p>The connection to a high-speed, low-latency cellular network will enable much more effective operation of autonomous vehicles, as they will be able to communicate in real time with other smart devices around them, including smart city infrastructure, connected traffic equipment, and other surrounding objects with smart sensors in them.</p>

<ul class="wp-block-list">
<li><strong>Industrial IoT solutions</strong></li>
</ul>

<p>The proliferation of 5G connections will also provide a strong foundation for advanced industrial automation solutions. IIoT networks of the future will be able to provide centralized management and seamless connectivity for various kinds of industrial devices and machinery, from automated manufacturing equipment to predictive maintenance and logistics.</p>

<ul class="wp-block-list">
<li><strong>Logistics and warehousing</strong></li>
</ul>

<p>Another major application for 5G technologies is logistics and warehousing. Fast connection to a fifth-generation cellular network makes it much easier to establish an IoT system to track product delivery, monitor storage conditions (such as temperature, humidity, etc.), coordinate the delivery across all the layers of the logistics network, minimize theft, eliminate other security risks, automate reporting and implement multiple other solutions to improve efficiency and productivity of logistics and warehousing operations.</p>

<ul class="wp-block-list">
<li><strong>Smart home</strong></li>
</ul>

<p>5G networks will also be able to support complex and universally interconnected smart home systems of the future, with all consumer electronics, utility systems and building equipment centrally managed and orchestrated by an AI-based solution.</p>

<ul class="wp-block-list">
<li><strong>Surveillance and security</strong></li>
</ul>

<p>Low latency and error rate, along with other strengths of 5G, will be beneficial for security-related applications of IoT devices. This includes interconnected surveillance cameras with face recognition, smart locks, theft prevention systems, and other security equipment.</p>

<h2 class="wp-block-heading">5G applications beyond IoT</h2>

<p>Of course, 5G technologies will have multiple applications beyond just IoT across many fields and economic sectors. Here are some of the most important ones.</p>

<ul class="wp-block-list">
<li><strong>Broadband mobile Internet connections</strong></li>
</ul>

<p>5G technology will enable mobile carriers to maintain wireless networks supporting broadband mobile Internet connection at previously unreachable speeds.</p>

<ul class="wp-block-list">
<li><strong>Mobile access to HD content and entertainment</strong></li>
</ul>

<p>With these fast 5G connections, users can access all kinds of high-resolution multimedia content, from HD TV to video games, on their phones and other mobile devices.</p>

<ul class="wp-block-list">
<li><strong>VR (virtual reality) and AR (augmented reality)</strong></li>
</ul>

<p>5G connection speeds and low latency would also be a great technological foundation for the developers of VR and AR games, allowing them to deliver a new generation of VR/AR products with a much better, more immersive, and more interactive gaming experience.</p>

<ul class="wp-block-list">
<li><strong>Satellite Internet connections</strong></li>
</ul>

<p>With 5G network connections using satellite technology, broadband Internet will be available even in the most remote rural areas with no traditional ground-based cellular network stations to provide the signal.</p>

<h2 class="wp-block-heading">Summary</h2>

<p>Besides all the advantages and benefits that come with it, the rollout of 5G networks also brings us new challenges. IoT networks and embedded systems will become more complex and difficult to manage as they will include a much larger number of nodes and higher volumes of data streamed by connected devices. This means that the demands on the architecture and maintenance of such systems will be higher as well.</p>

<p>The Tauro Technologies team of electronic engineers and designers has a proven track record of successfully designing custom hardware for various kinds of embedded systems and IoT products across multiple technology fields. Drawing on the specific needs of our clients, we select and apply various engineering methods to electronic product development and manufacturing in order to achieve the desired result. Utilizing our in-house IoT platform assembly and debug expertise, we are able to build and evaluate your prototypes rapidly and cost-efficiently before high-volume manufacturing.</p>

<p>Interested to know more? <a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch</a> with us for details.</p>
								</div>
				</div>
					</div>
		</div>
					</div>
		</section>
				</div>
		<p>The post <a href="https://taurotech.com/blog/5g-rollout-and-iot/">5G Rollout and How It Will Empower the Future of IoT</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Serial Protocols &#038; Their Uses: I2C, UART, SPI</title>
		<link>https://taurotech.com/blog/serial-protocols-their-uses/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=serial-protocols-their-uses</link>
		
		<dc:creator><![CDATA[Paul Kuepfer]]></dc:creator>
		<pubDate>Mon, 01 Aug 2022 14:32:28 +0000</pubDate>
				<category><![CDATA[Hardware design]]></category>
		<category><![CDATA[AI Accelerators]]></category>
		<category><![CDATA[CPU vs GPU]]></category>
		<category><![CDATA[Edge Computing]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[GPU Technology]]></category>
		<category><![CDATA[Machine Learning Hardware]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2180</guid>

					<description><![CDATA[<p>Serial Protocols &#38; Their Uses: I2C, UART, SPI Serial communications protocols are vital to embedded systems.&#160; While UART, I2C, and SPI have been used for short-distance device communication for decades, the benefits are not entirely apparent. In order to connect peripherals to a computer, one of the following protocols is typically employed: a Universal Asynchronous&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/serial-protocols-their-uses/">Serial Protocols &#038; Their Uses: I2C, UART, SPI</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center">Serial Protocols &amp; Their Uses: I<sup>2</sup>C, UART, SPI</h1>



<p>Serial communications protocols are vital to embedded systems.&nbsp; While UART, I<sup>2</sup>C, and SPI have been used for short-distance device communication for decades, their relative benefits are not always apparent.</p>



<p>In order to connect peripherals to a computer, one of the following protocols is typically employed: Universal Asynchronous Receiver Transmitter (UART), Inter-Integrated Circuit (I<sup>2</sup>C), or Serial Peripheral Interface (SPI). This blog will compare and contrast the features of each protocol and help you determine the best fit for your application.</p>



<h2 class="wp-block-heading"><strong>UART</strong></h2>



<p>Universal Asynchronous Receiver Transmitter (UART) is an asynchronous serial communication device with its roots dating back to the telegraph. There is no clock signal shared between the transmitter and the receiver (asynchronous serial communication). It sends 1 bit at a time, from least significant to most significant, and uses start and stop bits so the receiver can synchronize to each frame. During packet transmission, UART can use what is called a parity bit to check whether the information has changed during transmission.&nbsp;</p>



<p>In addition, data transmission between devices can be in simplex, half-duplex, or full duplex modes.</p>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh6.googleusercontent.com/V7yyyzUDYk-Hc-KHbapGhQmkKkEsoZXkjYxIH_7dDTLu7NLXMKf-LxKG6xeV7Aw_ZBvRsM4r2aGrTGSF3QScDBQkJvaJ-9CTHMV3GYIONSPz-NtGMupSpEbIi7gs9AI3uQM4C94PFL_qAWrqoFN_HA" alt="Technical diagram showing UART serial communication modes: simplex, full-duplex, and half-duplex between a transmitter and receiver"/><figcaption class="wp-element-caption">Figure 1: UART Modes of Operation</figcaption></figure>
</div>


<p>Data is transmitted at a baud rate measured in bits per second &#8211; some of the standard baud rates are 4800 bps, 9600 bps, 19200 bps, 115200 bps, etc. Of these, 9600 bps is the most commonly used.</p>
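<p>The baud rate sets the on-wire bit time, so a quick back-of-the-envelope calculation (a sketch, assuming the common 10-bit frame of 1 start + 8 data + 1 stop bits) gives the effective character throughput:</p>

```c
/* Effective characters per second on a UART link: the raw baud rate
 * divided by the number of bits per frame. With the common 10-bit
 * frame (1 start + 8 data + 1 stop), only 8 of every 10 bits on the
 * wire are payload. */
unsigned chars_per_second(unsigned baud, unsigned frame_bits)
{
    return baud / frame_bits;
}
```

<p>At 9600 bps this works out to 960 characters per second; at 115200 bps, 11,520.</p>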



<p>UARTs must be set for the same bit speed, character length, parity, and stop bits for proper operation on the transmit and receive side. If the receiving UART detects mismatched settings, a flag is set in the host system memory to indicate a failure.&nbsp;</p>



<p>The data in UART serial communication is organized into blocks called Packets or Frames. The structure of a typical UART data packet or the standard framing of the data is shown in the following table:</p>



<figure class="wp-block-table aligncenter is-style-regular"><table><thead><tr><th class="has-text-align-center" data-align="center">Frame</th><th>Start</th><th>Data</th><th>Parity</th><th class="has-text-align-left" data-align="left">Stop</th></tr></thead><tbody><tr><td class="has-text-align-center" data-align="center">Length</td><td>1 bit</td><td>5 to 9 bits</td><td>0 to 1 bits</td><td class="has-text-align-left" data-align="left">1 to 2 bits</td></tr></tbody></table><figcaption class="wp-element-caption">Table1: UART Packet Format</figcaption></figure>
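<p>The framing in Table 1 can be sketched in C. This is a software illustration only (real UARTs do the shifting in hardware), and the 8-data-bit, even-parity, 1-stop-bit configuration shown is just one common choice:</p>

```c
#include <stdint.h>

/* Build one UART frame as an array of line levels:
 * 1 start bit (low), 8 data bits sent LSB-first, 1 even-parity bit,
 * and 1 stop bit (high). Returns the number of bits in the frame. */
int uart_frame(uint8_t byte, int bits[11])
{
    int parity = 0;
    bits[0] = 0;                  /* start bit pulls the idle-high line low */
    for (int i = 0; i < 8; i++) {
        int b = (byte >> i) & 1;  /* least significant bit goes first */
        bits[1 + i] = b;
        parity ^= b;              /* even parity: XOR of all data bits */
    }
    bits[9] = parity;
    bits[10] = 1;                 /* stop bit returns the line to idle */
    return 11;
}
```

<p>Sending <code>0x55</code> this way produces the line sequence 0, 1, 0, 1, 0, 1, 0, 1, 0 for the start and data bits, a parity bit of 0 (the data contains four ones), and a final 1 for the stop bit.</p>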



<h3 class="wp-block-heading"><strong>Advantages</strong>:</h3>



<ul class="wp-block-list">
<li>Management is straightforward through hardware. It is utilized by standard protocols including RS-232/485/422.</li>



<li>Long-distance communication, up to 1 km, over RS-422/485 buses.</li>



<li>Requires only two wires for full-duplex data transmission (other than power lines).</li>



<li>Parity bit ensures basic error checking is integrated into the data packet frame.</li>



<li>No need for clock or any other timing signal. </li>
</ul>



<h3 class="wp-block-heading"><strong>Disadvantages</strong>:</h3>



<ul class="wp-block-list">
<li>Communication is only between two devices where the baud rate, data bit count, parity bit, and stop bit count need to be identical.</li>



<li>The data payload of each frame is limited to 9 bits at most (typically 8 data bits with no parity bit and one stop bit).</li>



<li>Overrun errors can occur if the receive buffer space is insufficient.</li>
</ul>



<h2 class="wp-block-heading"><strong>I</strong><strong><sup>2</sup></strong><strong>C</strong></h2>



<p>Unlike UART, Inter-Integrated Circuit (I<sup>2</sup>C) is a synchronous serial communication interface that utilizes a shared clock. Data bits are transferred one by one at regular intervals of time set by the SCL clock line. It is used primarily for short-distance, intra-board communication between low-speed controllers and processors and is ideal for applications that link many components on a bus. Although I<sup>2</sup>C is typically implemented with a single master and multiple slaves on the bus, it can also be implemented with multiple masters.&nbsp; Each slave device has a unique address, and I<sup>2</sup>C enables the master to send and request data from a particular slave device by issuing a start condition followed by the slave address.&nbsp;&nbsp;</p>



<p>I2C only uses two wires to transmit data between devices:</p>



<ul class="wp-block-list">
<li>SDA (Serial Data) – The line for the master and slave to send and receive data.</li>



<li>SCL (Serial Clock) – The line that carries the clock signal (common clock signal between multiple masters and multiple slaves).</li>
</ul>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh6.googleusercontent.com/CDTNji7bIFD01notv_As80smLUoGeKRiZ6vINjUPD9bvI9eXtzhKHENvDn3oH7k1pJvza4aegDXk1scdRgBrr4u-JSBtya2QMz52R_hpoJKM8EYptqQYcjNZV1pjEenYjRfg04fgSpeYTViEeuElNA" alt="Technical diagram showing an I2C bus with master and slave devices sharing the common SDA and SCL lines"/><figcaption class="wp-element-caption">Figure 2: I2C Interconnect Diagram</figcaption></figure>
</div>


<p>The structure of a typical I<sup>2</sup>C data packet or the standard framing of the data is shown in the following table (Note: the bold signals are sent by the slave and the other signals by the master):</p>



<figure class="wp-block-table aligncenter"><table><thead><tr><th>Frame</th><th>Start</th><th>Address</th><th>Read/Write&nbsp;</th><th><strong>ACK/NACK</strong></th><th>Data 1</th><th><strong>ACK/NACK</strong></th><th>Data 2</th><th><strong>ACK/NACK</strong></th><th>Stop</th></tr></thead><tbody><tr><td>Length</td><td></td><td>7 to 10 bits</td><td>1 bit</td><td><strong>1 bit</strong></td><td>8 bits</td><td><strong>1 bit</strong></td><td>8 bits</td><td><strong>1 bit</strong></td><td></td></tr></tbody></table><figcaption class="wp-element-caption">Table 2: I2C Packet Format</figcaption></figure>
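<p>The first frame after the start condition packs the 7-bit slave address and the read/write flag into a single byte, which a master driver might form as below. This is a sketch, and the 0x50 slave address in the usage example is hypothetical:</p>

```c
#include <stdint.h>

#define I2C_WRITE 0u  /* bit 0 clear: master will write to the slave */
#define I2C_READ  1u  /* bit 0 set: master will read from the slave  */

/* First byte sent after the I2C start condition: the 7-bit slave
 * address occupies bits 7..1 and the R/W flag occupies bit 0. */
uint8_t i2c_address_byte(uint8_t addr7, uint8_t rw)
{
    return (uint8_t)((addr7 << 1) | (rw & 1u));
}
```

<p>Addressing a hypothetical slave at 0x50 yields 0xA0 for a write and 0xA1 for a read.</p>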



<h3 class="wp-block-heading"><strong>Advantages</strong>:</h3>



<ul class="wp-block-list">
<li>Addressing function enables multiple masters and slaves.</li>



<li>Control a network of devices with only 2 I/O pins.</li>



<li>Simple mechanism for validation of data transfer.</li>



<li>I<sup>2</sup>C networks are easy to scale. New devices can simply be connected to the two common I<sup>2</sup>C bus lines.</li>



<li>No need for prior agreement on data transfer rate as in UART communication.</li>
</ul>



<h3 class="wp-block-heading"><strong>Disadvantages</strong>:</h3>



<ul class="wp-block-list">
<li>Slower speed (up to 100 kbit/s in standard mode, 400 kbit/s in fast mode).</li>



<li>Half-duplex interface.</li>



<li>Only one slave can be addressed at a time.</li>
</ul>



<h2 class="wp-block-heading"><strong>SPI</strong></h2>



<p>The Serial Peripheral Interface (SPI) is also a synchronous serial communication interface which is used primarily for short-distance communication. The main difference between SPI and I<sup>2</sup>C is that SPI uses full-duplex communication with a master-slave topology. Similar to I<sup>2</sup>C, SPI can be used to access multiple slave devices.</p>



<p>At the beginning of communication, the bus master configures the clock (commonly in the tens of MHz) and sends data to the slave.&nbsp; During each SPI clock cycle, one bit is transferred in each direction simultaneously.&nbsp; Unlike UART, there are no start and stop bits &#8211; this enables continuous data transmission, and the communication achieves higher speeds than I<sup>2</sup>C and UART.</p>
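<p>This full-duplex behavior can be modeled as two shift registers exchanging bits on each clock. The sketch below is a software model only (MSB-first, one byte per transfer); real SPI peripherals implement this in hardware:</p>

```c
#include <stdint.h>

/* Model a full-duplex SPI byte transfer: on each of the 8 clock
 * cycles the master shifts its MSB out on MOSI while the slave
 * shifts its MSB out on MISO, and each shifts the incoming bit in.
 * After 8 clocks the two registers have swapped contents. */
void spi_exchange(uint8_t *master_reg, uint8_t *slave_reg)
{
    for (int clk = 0; clk < 8; clk++) {
        uint8_t mosi = (uint8_t)((*master_reg >> 7) & 1u); /* master drives MOSI */
        uint8_t miso = (uint8_t)((*slave_reg  >> 7) & 1u); /* slave drives MISO  */
        *master_reg = (uint8_t)((*master_reg << 1) | miso);
        *slave_reg  = (uint8_t)((*slave_reg  << 1) | mosi);
    }
}
```

<p>If the master register holds 0xA5 and the slave register 0x3C, after one 8-clock exchange the master holds 0x3C and the slave 0xA5: a byte has moved each way in a single transfer.</p>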



<p>The SPI specification does not define a maximum data rate. Standard implementations reach a 10 Mbps transfer rate, with some devices reaching 100 Mbps.</p>



<p>The SPI bus consists of 4 signals below:</p>



<ul class="wp-block-list">
<li>Master – Out / Slave – In (MOSI)</li>



<li>Master – In / Slave – Out (MISO)</li>



<li>Serial Clock (SCLK)</li>



<li>Chip Select (CS) or Slave Select (SS)</li>
</ul>


<div class="wp-block-image">
<figure class="aligncenter"><img decoding="async" src="https://lh3.googleusercontent.com/R7tGQuq4qbD9_kuxCnN2oyKx3MNuYHc_TWUEoA-AeDzvpxKSgoP6TrjNqVAlQYGAbuDu1M1th7CTyH3R4kXbIafLN-8fiF47wW-mWDnSGMngSHW10XEiwVMQj6oASa6RZqY3-Om99SzwrMQvwqAmDg" alt="SPI Interconnect Diagram"/><figcaption class="wp-element-caption">Figure 3: SPI Interconnect Diagram</figcaption></figure>
</div>


<p>Depending on the values of Clock Polarity (CPOL) and Clock Phase (CPHA), there are 4 modes of operation of SPI:</p>



<ul class="wp-block-list">
<li>Mode 0 is active when Clock Polarity is LOW and Clock Phase is LOW  (CPOL = 0 and CPHA = 0). Data sampled on rising edge and shifted out on the falling edge.</li>



<li>Mode 1 is active when Clock Polarity is LOW and Clock Phase is HIGH  (CPOL = 0 and CPHA = 1). Data sampled on the falling edge and shifted out on the rising edge.</li>



<li>Mode 2 is active when Clock Polarity is HIGH and Clock Phase is LOW  (CPOL = 1 and CPHA = 0). Data sampled on the falling edge and shifted out on the rising edge.</li>



<li>Mode 3 is active when Clock Polarity is HIGH and Clock Phase is HIGH (CPOL = 1 and CPHA = 1). Data sampled on the rising edge and shifted out on the falling edge.</li>
</ul>



<p>The structure of a typical SPI data packet or the standard framing of the data is shown in the following image:</p>


<div class="wp-block-image is-style-rounded">
<figure class="aligncenter"><img decoding="async" src="https://lh3.googleusercontent.com/0nB6dpatYFry3R5R9tTU7NGl9-2aG6iYsGMAjhj2SvR068PQLJHfTc1flQL_yw8ZfoMDpSHmWRSjR0tofEn1fbh8TVXb7Zr5Qx1DqdRHGPsBjP9KyDMU1lbcTQCgC6u8NV5LmbOYDiiID9cxdV2DcA" alt="Technical diagram titled &quot;Figure 4: SPI Packet Format&quot; showing the data exchange between an SPI Master and an SPI Slave using MOSI, MISO, SCK, and SEL lines, highlighting the shift register mechanism for transferring binary data."/><figcaption class="wp-element-caption">Figure 4: SPI Packet Format</figcaption></figure>
</div>


<h3 class="wp-block-heading"><strong>Advantages</strong>:</h3>



<ul class="wp-block-list">
<li>Full-duplex communication is the default for the SPI protocol.</li>






<li>Not limited to 8-bit word size.</li>



<li>Real-estate savings on embedded boards.</li>



<li>High data transfer speed.</li>



<li>No need for individual addresses for slaves as CS or SS chip-select lines are used.</li>



<li>Only one master device is supported, removing the possibility of conflicts.</li>



<li>SPI uses less power than I<sup>2</sup>C.</li>
</ul>



<h3 class="wp-block-heading"><strong>Disadvantages</strong>:</h3>



<ul class="wp-block-list">
<li>No protocol-level error checking function and no hardware slave acknowledgement.</li>



<li>Short distances (up to 10 m).</li>



<li>Each additional slave requires an additional dedicated pin on the master for CS or SS.</li>






<li>Slowest device determines transfer speed.</li>
</ul>



<p></p>



<h2 class="wp-block-heading">Summary</h2>



<p>In general, you can use UART if you are looking for a simple connection between 2 devices, I<sup>2</sup>C if you are connecting several devices on the same bus, and SPI becomes the ideal choice if you require a faster interface.&nbsp; Whether you need UART&#8217;s tried and true operation, or want to utilize the expansion offered by I<sup>2</sup>C or the high speed of SPI, Tauro Technologies can implement a system using the most appropriate interface for your project.&nbsp;&nbsp;</p>



<p>Interested to know more?&nbsp;<a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch with us for details</a></p>



<p></p>
<p>The post <a href="https://taurotech.com/blog/serial-protocols-their-uses/">Serial Protocols &#038; Their Uses: I2C, UART, SPI</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Introduction to Linux Device Tree</title>
		<link>https://taurotech.com/blog/linux-device-tree/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=linux-device-tree</link>
		
		<dc:creator><![CDATA[Paul Kuepfer]]></dc:creator>
		<pubDate>Wed, 06 Jul 2022 11:55:28 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Projects]]></category>
		<category><![CDATA[ARM Development]]></category>
		<category><![CDATA[BSP Development]]></category>
		<category><![CDATA[Device Tree Compiler]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[Hardware Configuration]]></category>
		<category><![CDATA[Linux Device Tree]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2103</guid>

					<description><![CDATA[<p>Introduction to Linux Device Tree Most modern laptop or desktop computers have their peripheral devices (storage, media, or cameras) connected to the main processor through a peripheral bus such as PCIe or USB.&#160; Windows or Linux operating systems running on the computer can discover the connected peripherals through a process called enumeration or ‘plug and&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/linux-device-tree/">Introduction to Linux Device Tree</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center">Introduction to Linux Device Tree</h1>



<p>Most modern laptop or desktop computers have their peripheral devices (storage, media, or cameras) connected to the main processor through a peripheral bus such as PCIe or USB.&nbsp; Windows or Linux operating systems running on the computer can discover the connected peripherals through a process called enumeration, or ‘plug and play’.&nbsp; This provides information about device type, manufacturer, and device configuration, thus enabling the OS to load the appropriate drivers for the device and make it operational.</p>



<p>However, in embedded systems this is not the case, as many peripherals are connected to the main processor using buses such as I2C, SPI, and UART which do not support enumeration.&nbsp;&nbsp;</p>



<p>To enable the system to recognize the peripheral devices in an embedded system, developers use a Linux Device Tree, which provides the hardware description to the operating system.&nbsp; Before the device tree, developers would compile the hardware description into the Linux kernel and modify the kernel for each change in platform or peripheral device.&nbsp;&nbsp;</p>



<h2 class="wp-block-heading">What is a Device Tree?</h2>



<p>A Device Tree is a tree data structure with nodes that describe the devices in a system. Each node has property/value pairs that describe the characteristics of the device being represented. Each node has exactly one parent except for the root node, which has no parent.</p>



<p> In the Device Tree each node is named according to the following convention: <code>&lt;name&gt;[@&lt;unit-address&gt;]</code>.</p>



<ul class="wp-block-list">
<li><code>&lt;name&gt;</code>&nbsp;is a simple ASCII string and can be up to 31 characters in length. In general, nodes are named according to the device type they represent. A node for a 3com Ethernet adapter would use the name <code>ethernet</code>, not&nbsp;<code>3com509</code>.</li>



<li>The <code>&lt;unit-address&gt;</code> component of the name is specific to the bus type on which the node resides. The <code>&lt;unit-address&gt;</code> must match the first address specified in the reg property of the node. If the node has no reg property, the <code>@&lt;unit-address&gt;</code> must be omitted and the <code>&lt;name&gt;</code> alone differentiates the node from other nodes at the same level in the tree hierarchy. If <code>&lt;name&gt;</code>&nbsp;is used without <code>@&lt;unit-address&gt;</code>, the <code>&lt;name&gt;</code>&nbsp;shall be unique within the same level of the tree hierarchy.</li>
</ul>



<p>Figure 1 represents a simple tree:</p>



<ul class="wp-block-list">
<li>The nodes with the name <code>cpu </code>are distinguished by their <code>unit-address</code> values of <code>0</code> and <code>1</code>.</li>



<li>The nodes with the name <code>ethernet </code>are distinguished by their unit-address values of <code>fe002000 </code>and <code>fe003000</code>.</li>
</ul>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="624" height="313" src="https://taurotech.com/wp-content/uploads/2022/07/Device-Tree.png" alt="A technical diagram titled &quot;Figure 1: Examples of Node Names,&quot; illustrating a hierarchical tree structure for a Device Tree (DT). It shows a root node &quot;/&quot; branching into subnodes including &quot;cpus&quot; (with child nodes &quot;cpu@0&quot; and &quot;cpu@1&quot;), &quot;memory@0&quot;, &quot;uart@fe001000&quot;, and two &quot;ethernet&quot; nodes with specific memory addresses." class="wp-image-2105"/><figcaption class="wp-element-caption">Figure 1: Examples of Node Names</figcaption></figure>
</div>


<p>A node in the Device Tree can be uniquely identified by specifying the full path from the root node, through all descendant nodes, to the desired node.</p>



<p>The convention for specifying a device path is: <code>/node-name-1/node-name-2/node-name-N</code>.</p>
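<p>Putting the naming rules together, a minimal hand-written <code>.dts</code> fragment might look like the following (the addresses and the <code>compatible</code> string here are illustrative, not taken from a real board):</p>

```dts
/dts-v1/;

/ {
    cpus {
        cpu@0 {
            reg = <0>;
        };
        cpu@1 {
            reg = <1>;
        };
    };

    uart@fe001000 {
        compatible = "ns16550a";  /* driver match string (example) */
        reg = <0xfe001000 0x100>; /* unit-address matches the first reg value */
    };
};
```

<p>The full path of the UART node here is <code>/uart@fe001000</code>. A file like this would be compiled to a binary blob with the device tree compiler, e.g. <code>dtc -I dts -O dtb -o board.dtb board.dts</code>.</p>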



<p>Each node in the device tree has properties that describe the characteristics of the node. Properties consist of a name and a value. A property value is an array of zero or more bytes that contains information associated with the property.</p>



<p>With the Linux Device Tree, developers can create a single Linux kernel image specific to a processor architecture and multiple device tree images, each specific to a platform or a product.&nbsp; This makes it much easier to support and update peripheral changes across the various products and platforms one needs to support.</p>



<p>In x86 based platforms, ACPI is commonly used to describe the hardware peripherals and it can be used with or without the Linux Device Tree.&nbsp; However, in non-x86 platforms such as ARM based systems, Linux Device Tree is becoming the common method of enumerating hardware peripherals.</p>



<h2 class="wp-block-heading"><strong>How is Device Tree data managed?</strong></h2>



<p>Data from the Linux Device Tree can be shown in multiple different ways. Usually, the device tree data is kept in a human-readable format in <code>.dts</code> or <code>.dtsi</code> source files. The Linux kernel build pre-processes the <code>.dts</code> files before passing them to the device tree compiler.&nbsp; The source code of the device tree is compiled into a <code>.dtb</code> blob file in a binary format; this format is generally called a Flattened Device Tree (FDT). With this data, the Linux OS is able to find and identify devices in the system. The raw form of the FDT is accessed by the OS during the very early stages of system boot, but it is then expanded into a kernel data form called the Expanded Device Tree (EDT) so that it can be accessed more efficiently later, during and after boot.&nbsp;</p>



<p>As of today, device tree support is enabled in the Linux kernel for the MicroBlaze, SPARC, ARM, PowerPC and x86 architectures. To unify how platforms are described across kernel architectures, there is interest in extending device tree support to further platforms.</p>



<h2 class="wp-block-heading">Device Tree Advantages and Disadvantages:</h2>



<p>In summary, here are the advantages and disadvantages of Linux Device Tree.</p>



<h3 class="wp-block-heading">Advantages of the Linux Device Tree include:</h3>



<ol style="list-style-type:1" class="wp-block-list">
<li>Changing the configuration of parts of the system is very simple and does not require recompiling any Linux kernel source code.</li>



<li>Easier support for new/additional hardware.</li>



<li>Existing <code>.dts</code> files can be reused, and old functionality can be overridden.</li>



<li>It makes descriptions of hardware peripherals easier to understand.</li>
</ol>



<h3 class="wp-block-heading">However, disadvantages include:</h3>



<ol style="list-style-type:1" class="wp-block-list">
<li>Creating <code>.dts</code> files requires extensive knowledge of the hardware, so they may not be easy to write.</li>



<li>Figuring out the syntax needed to achieve the intended system behavior can be difficult even when the user knows all the bus and device details.</li>
</ol>






<p>We trust this general introduction to the Linux Device Tree is helpful for your system development. Tauro Technologies implements and customizes device trees as part of Board Support Package (BSP) development and board bring-up.</p>



<p>Interested to know more? <a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch with us for details</a>.</p>



<p>The post <a href="https://taurotech.com/blog/linux-device-tree/">Introduction to Linux Device Tree</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>RISC-V vs ARM. Which One To Choose?</title>
		<link>https://taurotech.com/blog/risc-v-vs-arm/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=risc-v-vs-arm</link>
		
		<dc:creator><![CDATA[Paul Kuepfer]]></dc:creator>
		<pubDate>Tue, 07 Jun 2022 03:27:37 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[Hardware design]]></category>
		<category><![CDATA[ARM]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[firmware development]]></category>
		<category><![CDATA[hardware design]]></category>
		<category><![CDATA[RISC-V]]></category>
		<category><![CDATA[RTOS]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2040</guid>

					<description><![CDATA[<p>RISC-V vs ARM. Which One To Choose? For quite a while, since the rise of smartphones in the late 2000s, the computer processors market has been dominated by ARM central processing units (CPUs) based on the reduced instruction set computer (RISC) architecture. Recently, however, a strong competitor has emerged with a considerably different approach towards&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/risc-v-vs-arm/">RISC-V vs ARM. Which One To Choose?</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center">RISC-V vs ARM. Which One To Choose?</h1>



<p>For quite a while, since the rise of smartphones in the late 2000s, the computer processor market has been dominated by ARM central processing units (CPUs) based on the reduced instruction set computer (RISC) architecture. Recently, however, a strong competitor has emerged with a considerably different approach to CPU architecture in microprocessors, mobile systems and microcontrollers. The name of this potential ARM killer is RISC-V (pronounced “risk-five”).&nbsp;</p>



<p>Over the last couple of years, the debate over the competition between ARM and RISC-V has grown more and more vibrant.&nbsp;</p>



<p>Will RISC-V ultimately replace ARM as the top CPU specification or will both technologies coexist? Let’s take a closer look at these two computer processor architectures, their technical specifications and how they are different from each other.&nbsp;</p>



<h2 class="wp-block-heading">What is ARM?&nbsp;</h2>



<p>ARM (originally Acorn RISC Machine, later Advanced RISC Machines) is a family of RISC instruction set architectures for computer processors, available for a wide range of computing devices and environments.&nbsp;</p>



<p>The ARM CPU architecture is developed by the Arm Ltd company, which licenses the architectures to other companies, allowing them to design their own products that incorporate different components, including interfaces and memory.&nbsp;</p>



<p>There have been a number of generations of the ARM architecture. The original version, ARM1, was introduced in 1985, almost 40 years ago. The first application for ARM processors was as a second processor for the BBC Micro, used to speed up simulation software. ARM1 had a 32-bit internal structure but a 26-bit address space, limiting main memory to 64 MB. This limitation was removed in ARMv3.</p>



<p>ARMv8-A, announced in 2011, added support for a 64-bit address space and 64-bit arithmetic.&nbsp;</p>



<p>ARM processors quickly gained popularity due to their low power consumption, lower costs compared to available alternatives, and minimal heat generation.&nbsp;</p>



<p>Even though ARM CPUs have been widely used since the initial release of the architecture, they really came to power in the late 2000s with the release of the first smartphones. Being an excellent CPU choice for portable devices thanks to their small size and low power consumption, ARM processors are preferred by manufacturers of smartphones, tablets and laptops. For the same reasons, ARM cores are also widely used in embedded systems.&nbsp;</p>



<p>According to official data, more than 200 billion ARM chips had been produced around the world as of 2021.&nbsp;</p>



<h2 class="wp-block-heading">What is RISC?&nbsp;</h2>



<p>Since we have already mentioned RISC a number of times, a few words about it are in order as well.&nbsp;</p>



<p>RISC is a design approach that simplifies the individual instructions given to a computer to perform its tasks. The difference from CISC (complex instruction set computer) designs is that a RISC architecture typically needs more instructions to complete a given task, because each individual RISC instruction does something simpler.&nbsp;</p>



<p>One of the key concepts of RISC computers is that each instruction performs only one function and typically completes in a single CPU cycle.&nbsp;</p>
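<p>As an illustration of this difference (the syntax is simplified; the first line is x86-style CISC, the remaining lines are RISC-V-style RISC):</p>

<pre class="wp-block-code"><code># CISC: one instruction reads memory, adds, and writes the result back
add dword [counter], 1

# RISC: the same work as three simple instructions, each doing one thing,
# and each designed to complete in a single cycle
lw   t0, 0(a0)      # load the word at address a0
addi t0, t0, 1      # add the immediate value 1
sw   t0, 0(a0)      # store the result back</code></pre>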



<h2 class="wp-block-heading">What is RISC-V?</h2>



<p>RISC-V is essentially the fifth generation of RISC designs from UC Berkeley, provided as an open standard instruction set architecture (ISA) based on RISC principles. Unlike the majority of other ISA designs, it is published under open licenses, so it is free for any computer chip producer to use.</p>



<p>The RISC-V specification defines both 32-bit and 64-bit address space options, and additionally includes a description of a 128-bit flat address space variant.&nbsp;</p>



<p>RISC-V is a load–store architecture with IEEE 754 floating-point arithmetic. The ISA also fixes instruction bit-field locations in a way that simplifies the instruction-decode multiplexers in a CPU.&nbsp;</p>



<p>Started with the goal of creating a practical open-source ISA that would be easily deployable in a variety of hardware and software designs, including embedded systems, the RISC-V ISA continues a long line of CPU architecture design projects developed at the University of California, Berkeley, since the early 1980s.</p>



<h3 class="wp-block-heading">History of the RISC-V specification development</h3>



<p>The project to develop the RISC-V specification was started in 2010 by researchers at the University of California with the intent of creating a practical instruction set suitable for use in real CPU designs.&nbsp;</p>



<p>Dr. Krste Asanović, a professor of computer science at UC Berkeley, initiated the RISC-V project. He was eventually joined by Dr. David Patterson, another UC Berkeley professor and one of the creators of the original Berkeley RISC chips back in the early 1980s.</p>



<p>As any ISA needs to be stable for commercial use, the RISC-V Foundation was formed in 2015 with a goal to develop, maintain and publish the intellectual property related to the RISC-V specification. The original authors of the project at UC Berkeley have transferred all the rights to this non-profit corporation controlled by its members.</p>



<p>The organization currently comprises over 325 members, including representatives of companies such as Google, NVIDIA, Microsemi and Western Digital. Its members participate in the development of the RISC-V ISA specification and related projects.&nbsp;</p>



<p>In 2019, with U.S. trade-regulation concerns as the main reason, the RISC-V Foundation relocated to Switzerland. In 2020, it was renamed <a href="https://riscv.org/">RISC-V International</a> and became a Switzerland-registered nonprofit business association.</p>



<p>Today, RISC-V International publishes all the documentation and specifications related to RISC-V designs, which remain open source and available for everyone to use free of charge. However, only members of RISC-V International can vote to approve changes to the RISC-V specifications.&nbsp;</p>



<h2 class="wp-block-heading">ARM vs RISC-V Comparison&nbsp;</h2>



<p>Here’s a table comparing technical specifications of ARM and RISC-V.&nbsp;</p>



<figure class="wp-block-table"><table><tbody><tr><td><strong>Features</strong></td><td><strong>ARM</strong></td><td><strong>RISC-V</strong></td></tr><tr><td><strong>Architecture</strong></td><td>Load-store</td><td>Load-store<br></td></tr><tr><td><strong>Memory Addressing</strong></td><td>64-bit Virtual</td><td>32 / 64-bit</td></tr><tr><td><strong>Architecture size&nbsp;</strong></td><td>64-bits</td><td>64-bits</td></tr><tr><td><strong>License</strong></td><td>Core / Architecture</td><td>Open source&nbsp;</td></tr><tr><td><strong>Instruction Set</strong></td><td>A64</td><td>None&nbsp;</td></tr><tr><td><strong>Instruction Set Width</strong></td><td>32-bit</td><td>32-bit</td></tr><tr><td><strong>Instruction Set Compression</strong></td><td>To 16-bit</td><td>To 16-bit</td></tr><tr><td><strong>Endianness</strong></td><td>Bi-endian (little by default)</td><td>Little</td></tr><tr><td><strong>Max speed</strong></td><td>2.6GHz</td><td>3.0GHz</td></tr><tr><td><strong>Pipeline length</strong></td><td>12 stages&nbsp;</td><td>17 stages&nbsp;</td></tr><tr><td><strong>Integer Registers</strong></td><td>31</td><td>32 / 16</td></tr><tr><td><strong>FP / SIMD units&nbsp;</strong></td><td>2x 64 bits</td><td>2x 128 bits</td></tr><tr><td><strong>Vector Registers</strong></td><td>32</td><td>Add-On</td></tr><tr><td><strong>Multiplication</strong></td><td>Included</td><td>Add-On</td></tr></tbody></table><figcaption class="wp-element-caption">ARM vs RISC-V Architecture comparison</figcaption></figure>



<h2 class="wp-block-heading">Final thoughts. ARM vs RISC-V: Which one to choose?&nbsp;</h2>



<p>As you can probably tell from the comparison chart above, there is no simple answer to this question.&nbsp;</p>



<p>In many ways, ARM-based CPUs are still the better option right now, mainly due to their much longer track record and the billions of dollars Arm Ltd has invested in the specification over the years. ARM processors hold a huge market share, powering the majority of smartphones as well as laptops and even PCs that choose ARM over x86-based designs.&nbsp;</p>



<p>We could say, however, that RISC-V is the future and a very strong contender for the throne of the most-used processor architecture. RISC-V can provide good performance while using a minimal amount of power, and the fact that it is open source and free for any processor manufacturer to use is also a huge advantage.</p>



<p>Some manufacturers, such as Western Digital, have already started implementing RISC-V cores in the controllers of their storage products.&nbsp;</p>



<p>RISC-V is also getting increasingly popular in IoT devices and embedded systems of various kinds, due to its highly scalable nature. But it will undoubtedly take several years for industry players to transition to using RISC-V instead of ARM-based designs.&nbsp;</p>



<p>The Tauro Technologies&#8217; team of electronic engineers and designers has a proven track record of successfully designing custom hardware for various kinds of products in multiple technology fields. Drawing on the specific needs of our clients, we select and apply various engineering methods to electronic product development and manufacturing in order to achieve the desired result. Utilizing our in-house PCB assembly and debug expertise, we are able to build and evaluate your prototypes before high-volume manufacturing, rapidly and cost-efficiently.&nbsp;</p>



<p>Interested to know more? <a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch with us for details</a>.</p>
<p>The post <a href="https://taurotech.com/blog/risc-v-vs-arm/">RISC-V vs ARM. Which One To Choose?</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Six Key Embedded Systems Industry Trends in 2022</title>
		<link>https://taurotech.com/blog/six-key-embedded-systems-industry-trends-in-2022/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=six-key-embedded-systems-industry-trends-in-2022</link>
		
		<dc:creator><![CDATA[Paul Kuepfer]]></dc:creator>
		<pubDate>Mon, 02 May 2022 14:50:10 +0000</pubDate>
				<category><![CDATA[Embedded Systems]]></category>
		<category><![CDATA[5G]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Embedded systems]]></category>
		<category><![CDATA[trends]]></category>
		<guid isPermaLink="false">https://taurotech.com/?p=2011</guid>

					<description><![CDATA[<p>Six Key Embedded Systems Industry Trends in 2022 The demand for embedded systems across various industries and technology fields today is as high as ever before. Embedded systems are essential to many electronic devices and automated solutions that we are increasingly relying upon. So it comes as no surprise that the global embedded systems market&#8230;</p>
<p>The post <a href="https://taurotech.com/blog/six-key-embedded-systems-industry-trends-in-2022/">Six Key Embedded Systems Industry Trends in 2022</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h1 class="wp-block-heading has-text-align-center">Six Key Embedded Systems Industry Trends in 2022</h1>



<p>The demand for embedded systems across various industries and technology fields today is as high as ever before. Embedded systems are essential to many electronic devices and automated solutions that we are increasingly relying upon. So it comes as no surprise that the global embedded systems market is rapidly growing. According to a recent <a href="https://www.marketwatch.com/press-release/embedded-systems-market-trends-2022-growth-opportunities-top-leading-players-global-trends-industry-share-competitive-landscape-applications-analysis-and-forecast-to-2029-2022-02-16">study</a>, the total size of the embedded systems market is expected to reach $116.2bn by 2026 from $86.5bn last year, growing at a CAGR of 6.3% from 2021 to 2026.&nbsp;</p>



<p>Even the COVID pandemic and global economic turbulence caused by this healthcare crisis weren’t able to disrupt the consistent growth of the embedded systems market. As the authors of an older market study <a href="https://www.marketsandmarkets.com/Market-Reports/embedded-system-market-98154672.html">noted</a>, even though low demand for consumer electronic devices due to COVID lockdowns had its negative impact, it was balanced by the increasing need for various embedded hardware components for the healthcare industry.&nbsp;</p>



<h2 class="wp-block-heading">Six most interesting embedded systems industry trends in 2022&nbsp;</h2>



<p>The abundance of technological and market development trends is another sign that we have all the reasons to feel optimistic about the evolution of the embedded systems industry going forward.&nbsp;</p>



<p>Let’s take a closer look at some of the most interesting and noteworthy trends that, in our opinion, will influence the embedded systems industry in 2022 and over the next few years.&nbsp;</p>



<h3 class="wp-block-heading">Automotive industry driving embedded systems market growth</h3>



<p>When it comes to the applications of embedded systems, the automotive industry today is one of the main drivers of market growth. This trend will most likely strengthen in 2022 as well, fueled by continuously rising demand for electric and hybrid vehicles across the globe. Manufacturers of electric and hybrid vehicles rely on embedded systems in a variety of smart electronic components such as advanced driver-assistance systems (ADAS), power control units, engine cooling systems, etc. Additionally, the automotive and mobile robotics industries are rapidly adopting autonomous technologies and further require the integration of LIDAR, camera, sensor and power subsystems. All of these components rely on embedded systems for the centralization and coordination of processes.&nbsp;</p>



<h3 class="wp-block-heading">Explosive demand for military embedded systems&nbsp;</h3>



<p>As you may know, embedded systems play a vitally important role in many devices and electronic components used in military applications. The demand for weaponry and advanced military equipment has been on the rise in recent years as a result of escalating regional tensions and geopolitical rivalry around the globe, and we can expect a further boost to the military embedded systems market as most NATO countries significantly increase their defense budgets. Military embedded systems are used in the land, sea and air theaters for a wide variety of applications, including unmanned vehicles, counter-UAV systems, surveillance systems, weapons guidance systems, communication equipment, command and control solutions, satellite communications, etc.&nbsp;</p>



<p>Following the evolution of military systems, we can clearly see that it is not just commercial-sector companies looking to implement AI, 5G, cloud computing and other technological innovations. Many defense contractors are also interested in tech innovations as a way to produce more advanced systems.&nbsp;</p>



<h3 class="wp-block-heading">Wider AI and ML integration&nbsp;</h3>



<p>Artificial intelligence (AI) has been one of the most significant technology trends in recent years. AI and ML (machine learning) solutions continue to gain momentum and spread across a variety of industries and market segments.&nbsp;</p>



<p>Embedded systems are no exception, even though AI and ML solutions have traditionally been challenging to implement in embedded systems due to hardware and framework limitations. But new hardware solutions, along with innovative techniques for inference processing, data curation and performance acceleration, help overcome these obstacles. In 2022, we expect to see even more new embedded implementations leveraging AI and ML technologies.&nbsp;</p>



<p>NVIDIA products are widely used for training and inferencing applications in many AI systems, and Tauro Technologies has been building these systems from their early days. As the industry evolves, other silicon and software solutions are emerging that promise to offer better price–performance ratio for many machine vision applications.</p>



<h3 class="wp-block-heading">Embedded security and defense against cyber threats&nbsp;</h3>



<p>Cyberattacks and information security breaches have been on the rise for a number of years now. And it’s not a secret that embedded systems are known to be vulnerable to hacker attacks and cybersecurity threats of various kinds. There are multiple reasons why embedded systems often fail to provide the appropriate level of protection against cyber threats: poor access control or authentication settings, no regular security updates, remote deployment, reliance on legacy hardware, etc.&nbsp;</p>



<p>This is why the development of embedded security software and hardware has been on the rise in recent years, as have standards for the security of embedded hardware designs. Specifically, we have noticed a rise in embedded systems that implement TPM, AES encryption and FIPS 140 compliance on their hardware platforms.</p>



<h3 class="wp-block-heading">&nbsp;5G technologies and 5G-based embedded systems&nbsp;</h3>



<p>The ongoing deployment of 5G infrastructure is expected to be a major growth driver for a variety of technology fields, notably telecommunications, industrial automation, the internet of things (IoT), automotive, etc. The demand for embedded systems based on 5G architecture will increase along with the overall 5G rollout. The greater communication and processing speeds achieved with 5G will no doubt help solve the performance issues typical of embedded systems based on previous-generation communication standards.</p>



<h3 class="wp-block-heading">Virtual and augmented reality with embedded systems</h3>



<p>Virtual reality (VR) and augmented reality (AR) form another major tech industry niche that has been trending for a while, keeps gaining momentum year after year, and received an additional boost from the COVID pandemic and increasing global turbulence. VR/AR solutions have a wide range of cost-saving and efficiency-improving applications. Modern feature-rich virtual environments cannot function without complex, high-performance embedded systems, which allow VR/AR solutions to match the user’s movements with real-time rendering of graphics, sound and text. We have seen early applications of VR/AR in skills development and training for both industrial and military purposes, which is why we expect the demand for such complex VR/AR embedded systems to increase in 2022 as well.&nbsp;</p>



<h2 class="wp-block-heading">Final thoughts&nbsp;</h2>



<p>Some of the other notable embedded systems industry trends that we didn&#8217;t mention in this article are the rapidly growing real-time segment of the market, rising popularity of Python as the main programming language for embedded systems software, related IoT development trends, and more.&nbsp;</p>



<p>It is also worth mentioning that all six industry trends described above are connected and, in many ways, fuel each other’s growth. For example, the demand for autonomous vehicles, robotics and AI technologies in commercial and military systems drives the demand for faster 5G communication, which can deliver the network speeds required to fully implement these innovations.</p>



<p>Based on the foregoing, it is safe to say that the demand for embedded systems across market niches and applications will be on the rise for at least the next ten years or so. In this increasingly complex and competitive business environment, a professional approach to IoT and embedded systems design becomes even more important.&nbsp;</p>



<p>The Tauro Technologies&#8217; team of electronic engineers and designers has a proven track record of successfully designing custom hardware for various kinds of products in multiple technology fields. Drawing on the specific needs of our clients, we select and apply various engineering methods to electronic product development and manufacturing in order to achieve the desired result. Utilizing our in-house PCB assembly and debug expertise, we are able to build and evaluate your prototypes before high-volume manufacturing, rapidly and cost-efficiently.&nbsp;</p>



<p>Interested to know more? <a href="https://taurotech.com/contact-us/" target="_blank" rel="noreferrer noopener">Get in touch with us for details</a>.</p>



<p>The post <a href="https://taurotech.com/blog/six-key-embedded-systems-industry-trends-in-2022/">Six Key Embedded Systems Industry Trends in 2022</a> appeared first on <a href="https://taurotech.com">Tauro Technologies</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
