
Chemical Market Reports


Technology

Samsung Galaxy Book3 series, including the Book3 Ultra, surfaces online

January 30, 2023 by Spencer Edward

 

Samsung’s Galaxy Book3 series is expected to be unveiled next week alongside the Galaxy S23 series. Thanks to WinFuture, detailed specifications and images of the laptops have emerged online. The Galaxy Book3 series will include the Galaxy Book3, Galaxy Book3 Pro, and Galaxy Book3 Ultra, as well as two 2-in-1 models, the Galaxy Book3 360 and Galaxy Book3 Pro 360, both of which support the S Pen.

The laptops will all be outfitted with the latest Intel 13th Gen Raptor Lake mobile processors, with the Ultra model offering up to an Nvidia RTX 4070 GPU. They are expected to be revealed at the Galaxy S23 Unpacked event on February 1.

Samsung Galaxy Book3 Ultra:

The Galaxy Book3 Ultra is the high-end model, with a 16-inch WQXGA+ (2880 x 1800 pixels) AMOLED display, a 120Hz refresh rate, and a peak brightness of 500 nits. It will be powered by 13th-generation Intel Core H-series processors, up to the Core i9-13900H, and will come with up to 32GB of RAM and an Nvidia RTX 4070 GPU.

Samsung Galaxy Book3 Ultra Expected Specifications:

  • 16″ WQXGA+ (2880 x 1800 pixels) AMOLED screen, 16:10, 120Hz, 500 nits peak brightness
  • Intel Core i7-13700H or Intel Core i9-13900H CPU
  • Nvidia RTX 4050 or Nvidia RTX 4070 GPU
  • 16GB/32GB LPDDR5 RAM, 512GB/1TB Gen 4 M.2 PCIe NVMe SSD
  • Windows 11 Home
  • 2x Thunderbolt 4, 1x USB A 3.2 Gen 1, 2x USB-C 3.2 Gen 2, HDMI, MicroSDXC, 3.5mm headphone jack
  • FullHD webcam
  • AKG quad speakers, Dolby Atmos support
  • WiFi 6, Bluetooth 5.1
  • Dimensions: 1.65 x 35.5 x 25 cm; Weight: 1.79Kg
  • 76Wh battery, up to 17.5 hours run time

Samsung Galaxy Book3 and Galaxy Book3 360:

The Galaxy Book3 is the entry-level model in the series, with a 15.6-inch FullHD IPS LCD screen offering up to 300 nits of peak brightness. It will be equipped with 13th-generation Intel Core U-series processors, up to the Core i7-1355U. It does not include a dedicated graphics card and relies on integrated Iris Xe graphics.

The 2-in-1 Galaxy Book3 360, on the other hand, should come in two screen sizes: 13.3 inches and 15.6 inches. The screen is an OLED panel with FullHD resolution, touch input, and S Pen support. It will be powered by 13th-generation Intel Core P-series processors with Iris Xe graphics.

Samsung Galaxy Book3 and Galaxy Book3 360 Expected Specifications:

  • Galaxy Book3: 15.6″ FullHD (1920 x 1080 pixels) IPS LCD screen, 16:9, 60Hz, 300 nits peak brightness
  • Galaxy Book3 360: 13.3″/15.6″ FullHD (1920 x 1080 pixels) OLED screen, 16:9, 120Hz, HDR 500, 370 nits peak brightness
  • Galaxy Book3: Intel Core i3-1315U or Intel Core i5-1335U or Intel Core i7-1355U CPU
  • Galaxy Book3 360: Intel Core i5-1340P or Intel Core i7-1360P CPU
  • Intel Iris Xe graphics
  • 8GB/16GB LPDDR4X RAM, 256GB/512GB Gen 4 M.2 PCIe NVMe SSD
  • Windows 11 Home
  • 1x Thunderbolt 4, 1x USB A 3.2 Gen 1, 1x USB-C 3.2 Gen 2, HDMI, MicroSDXC, 3.5mm headphone jack
  • FullHD webcam
  • Dual speakers, Dolby Atmos support
  • WiFi 6, Bluetooth 5.1
  • Galaxy Book3 – Dimensions: 1.54 x 35.6 x 22.9 cm; Weight: 2.44Kg
  • Galaxy Book3 360 – Dimensions: 1.29 x 30.4 x 20.2 cm (13.3″), Weight: 1.83Kg (13.3″); Dimensions: 1.37 x 35.5 x 22.8 cm (15.6″), Weight: 1.24Kg (15.6″)
  • Galaxy Book3: 54Wh battery
  • Galaxy Book3 360: 61.1Wh (13.3″), 68Wh (15.6″)

Samsung Galaxy Book3 Pro and Galaxy Book3 Pro 360:

The Galaxy Book3 Pro is expected to have a 14- or 16-inch WQXGA+ (2880×1800 pixel) OLED display with a refresh rate of 120Hz and a peak brightness of 400 nits. These will be equipped with 13th-generation Intel Core P series processors, up to and including the Core i7-1360P.

The Galaxy Book3 Pro 360 is essentially a 2-in-1 variant of the 16-inch model with a 360-degree hinge and a touchscreen that supports the S Pen.
Otherwise, the device is nearly identical to the Galaxy Book3 Pro.

Samsung Galaxy Book3 Pro and Galaxy Book3 Pro 360 Expected Specifications:

  • Galaxy Book3 Pro: 14″/16″ WQXGA+ (2880×1800 pixel) OLED screen, 16:10, 120Hz, HDR 500, 400 nits peak brightness
  • Galaxy Book3 Pro 360: 16″ WQXGA+ (2880×1800 pixel) OLED screen, 16:10, 120Hz, HDR 500, 400 nits peak brightness
  • Intel Core i5-1340P or Intel Core i7-1360P CPU
  • Intel Iris Xe graphics
  • 16GB LPDDR5 RAM, 512GB Gen 4 M.2 PCIe NVMe SSD
  • Windows 11 Home
  • 2x Thunderbolt 4, 1x USB A 3.2 Gen 1, 2x USB-C 3.2 Gen 2, HDMI, MicroSDXC, 3.5mm headphone jack
  • FullHD webcam
  • AKG quad speakers, Dolby Atmos support
  • WiFi 6, Bluetooth 5.1
  • Galaxy Book3 Pro – Weight: 1.17Kg (14″), 1.56Kg (16″)
  • Galaxy Book3 Pro 360 – Weight: 1.8Kg (16″)
  • 63Wh (14″), 76Wh (16″)
Expected Pricing:

The Galaxy Book3 Ultra is expected to start at 2,900 euros (Rs. 2,57,020 approx.) for the i7 variant, while the i9 variant should cost around 3,800 euros (US$ 4,131 / Rs. 3,36,785 approx.). Pricing for the 14″ Galaxy Book3 Pro is expected to start at 1,749 euros (US$ 1,901 / Rs. 1,55,009 approx.), while the 16″ version is said to cost around 1,949 euros (US$ 2,119 / Rs. 1,72,735 approx.).

The expected prices of the entry-level Galaxy Book3 models are unknown. We should know the official prices and more details after the launch on February 1, 2023.

Source 1, 2

Filed Under: Technology

Apple could be working on an ambitious 16-inch iPad model

October 27, 2022 by Spencer Edward

Apple iPad: Apple is rumored to be planning larger iPads by the end of 2023, according to a new report.

Apple’s new iPads are approaching its MacBooks in size. But is the larger screen enough to accomplish the task? (Image Source: Apple)

Apple recently launched new iPad models, including two Pro models and an updated 10th-generation iPad.

According to The Information, Apple is developing a new 16-inch iPad. It will also be the biggest iPad yet. The report states that the device will “further blur the line” between the iPads and MacBooks. The new large-screen iPad could be launched in Q4 2023. That would be approximately one year from now.

Bloomberg’s Mark Gurman has also reported that Apple is working on a larger iPad. Display analyst Ross Young, meanwhile, says the company is developing a 14.1-inch iPad, which would be bigger than any current iPad but not as large as the rumored 16-inch variant.

The blurring of the line between iPads and MacBooks

Apple has long positioned iPads as more than tablets; they are also pitched as replacements for traditional computers. This is especially true of the iPad Pro series, where Apple has provided a variety of storage options, accessory support, and processing power to make the lineup even more capable.

Tablets still trail laptops in screen size, which is one reason Apple’s MacBook lineup retains a distinct role. A larger iPad screen would suit the kind of creative work Apple highlights in its official launch videos, and bigger tablets could begin to change that balance.

Apple’s iPad Pros are powerful tablets powered by M-series chipsets. However, many have criticized the software bottlenecks in iPadOS, which reportedly prevent the tablets from reaching their full potential compared with MacBooks. It will be interesting to see whether Apple addresses this area alongside the larger displays.

It’s not yet clear where Apple will put the larger iPads within its rather confusing lineup. Buyers already have plenty to choose from among the iPad Air, iPad Pro, iPad mini, and the vanilla iPad. More information about the new iPads should be available closer to their launch.

Filed Under: Technology

EV Sets Its Stage in India

September 6, 2022 by Spencer Edward

Warren Buffett’s Berkshire Hathaway invested US$232 million in China’s BYD when it was still just a start-up beginning to sell electric vehicles. That stake is now worth about US$7,500 million, and BYD has overtaken Tesla as the world’s largest EV maker.

BYD sold 3.54 lakh electric vehicles between April and June, an increase of 266% year on year. Meanwhile, Tesla’s global sales grew 27%, to more than 2.54 lakh units. BYD has also surpassed South Korea’s LG Energy Solution to become the world’s second-largest EV battery manufacturer.

The Shenzhen-based business is aggressively making its way into international markets, including India, where it will soon unveil its first-ever e-SUV. Deliveries are about to begin in January.

India’s passenger EV market has been dominated by Tata Motors, maker of the Tigor EV and the Nexon EV. With its ZS EV, China’s SAIC-owned brand MG Motor held 11.5% of the passenger EV market.

BYD India’s senior VP, Sanjay Gopalakrishnan, claims the company’s technologies give it an edge. BYD India will sell premium EVs at first, with a goal of reaching 40% of the e-PV market by 2030. Gopalakrishnan estimates that India could see 45,000-50,000 EVs sold this year. Initial EV adoption of 1.5% to 2% will take time to build, and adoption should rise further as the resale market develops.

India is one of a handful of countries supporting the global EV30@30 campaign, which aims to see at least 30% of new car sales go electric by 2030. BYD’s entry into India’s electric vehicle market signals that global carmakers are increasingly interested in the country. Hyundai launched the Kona EV in India in 2019 to test the market and is now preparing a broader push.

The South Korean firm is also developing a small electric car for India as part of its plan for six EVs by 2028. Hyundai’s sister company Kia recently launched its first electric vehicle in India, the premium crossover SUV EV6. Sweden’s Volvo launched the compact SUV XC40 in India in July. Tata’s electric SUV, the Curvv, is expected to hit the market in the next two years, while MG Motor plans to launch an affordable mass-market EV next year. Volkswagen plans to launch its first electric vehicle in India, the ID.4 SUV, next year in limited quantities. Starting in 2024, Mahindra & Mahindra plans to launch five electric SUVs aimed at both domestic and international markets. Tata Motors intends to launch its pure electric vehicle, the Avinya, in 2025.

Luxury car companies are not slowing down either. Mercedes-Benz is the first company to assemble a luxury EV in India and has three new electric cars. Audi India launched the e-tron, its maiden EV offering, last year. BMW India launched its electric models over six months, including the iX SUV, the Mini electric hatchback, and the i4 sedan. Jaguar’s I-PACE has joined Porsche’s all-electric Taycan.

American EV start-up Fisker plans to launch two EVs in India. Experts believe electrification will happen faster in the luxury segment than in the mass-market car segment.

At the current pace, EVs could account for as much as 1.4% of domestic passenger vehicle sales this fiscal year, signalling a rise in adoption. Tata Motors, MG Motor, and others will likely be the first movers.

While several carmakers are still laying foundations, Indians can expect a wide range of EVs at varying price points over the next two years. Carmakers hope that by then the Indian passenger EV market will have matured.

Filed Under: Technology

According to a New Study, Meta Injects Tracking Code into Websites to Track Its Users

August 16, 2022 by Spencer Edward

According to a new study by an ex-Google engineer, Meta, the owner of Instagram and Facebook, has been rewriting the websites its users visit, which lets the company follow them across the web after they click links displayed in the apps.

Both apps take advantage of the fact that users who click links are shown the webpages in an “in-app browser” administered by Instagram or Facebook, instead of being directed to a browser of their choice such as Firefox or Safari.

Felix Krause, a privacy researcher who founded an app development tool acquired by Google in 2017, said that the Instagram app adds its tracking code to every website shown, allowing the platform to monitor all user interactions: every tapped link and button, ad clicks, screenshots, text selections, and any text a user enters, including credit card numbers, addresses, and even passwords.

In a statement, Meta said the injected tracking code honors the preferences users give on whether they grant apps permission to track them, and that it only aggregates data before it is applied to targeted advertising or measurement, and only for users who have opted into such tracking.

A spokesperson said the code was developed to respect users’ preferences by asking for consent to track, and that it aggregates user data before that data is used for targeted advertising. The platform does not add its own pixels to websites; rather, the injected code helps aggregate conversion events from pixels that site owners have already installed.

They further added that when a user makes a purchase in the in-app browser, the platform asks for permission before saving payment information for autofill.

Krause discovered the injection by building a tool that lists any extra commands a browser adds to a website. For most apps and standard browsers, the tool finds no changes, but for Instagram and Facebook it detects up to 18 lines of code added by the app. The detected code checks for a specific cross-platform tracking kit and, if that kit is not installed on the device, calls the Meta Pixel instead. Meta Pixel is a tool that lets the platform follow a user around the web and build a detailed profile of their preferences.
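
To make the detection idea concrete, here is a minimal TypeScript sketch of what such a check can look like: a page enumerates the script elements it is actually rendered with, so anything an in-app browser layers on top of the original HTML becomes visible. This is only an illustration of the general approach, not Krause’s actual tool.

  // List every script the rendered page contains; injected code shows up here.
  const report = Array.from(document.scripts).map((script, index) => ({
    index,
    src: script.src || "(inline)",
    preview: (script.textContent ?? "").trim().slice(0, 80),
  }));
  // In a standard browser this table lists only the page's own scripts;
  // extra entries indicate code added by the environment rendering the page.
  console.table(report);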

According to Krause’s study, the platform does not disclose to users that it rewrites the web pages they visit in this way. No such tracking code is added to WhatsApp’s in-app browser.

“JavaScript injection” is classified as a type of malicious attack, defined as the practice of adding extra code to a webpage before it is displayed to a user. Feroot, a cybersecurity company, describes it as an attack that lets a threat actor take control of a website or web application in order to steal personal data, including payment information and personally identifiable information (PII).

There is no evidence that Meta has injected its JavaScript aiming to gather such sensitive personal user data. It is still uncertain when Facebook started to use a tracking code in order to track users after they clicked on links.

Filed Under: Technology

To Add Some Oomph and Excitement To Lessons, Use Virtual Reality

August 3, 2022 by Spencer Edward

Virtual reality technology is an effective tool for improving classroom instruction, and it is being used more often in schools to enrich everyday lessons. West Baton Rouge Parish Schools in Louisiana has 4,100 students, and VR has offered its teachers a unique opportunity to engage them.

“We used ESSER funds to buy several ClassVR headsets for our district. Teachers of all grades incorporate VR content in the classroom. I’m the leader of the district’s tech group, and we frequently present on VR technology at conferences such as ISTE, FETC, and TCEA. The most important benefit is that students enjoy virtual reality technology. They can also have virtual experiences that let them connect with the curriculum at a deeper level.” Here are some examples.

VR Field Trips

“This is a request that teachers often make. VR allows students to see the Great Wall of China, Rome’s Colosseum, and many other places without ever leaving the classroom. Many of our students never venture out of their neighborhood. We are located just across the river from Baton Rouge, yet many of our students have never visited the city.”

“Virtual reality technology gives students the opportunity to explore the globe and discover landmarks. VR gives students a wider view than their own street and opens their eyes to the world around them. It makes them wonder whether they could one day travel to Africa or the Great Wall of China.”

The curriculum can be made richer by being connected to the lives of students.

Enhancing the Curriculum

This is one example we love: third-grade teachers who use VR in their ELA lessons. Teachers host “engagement days” to explore specific topics, like space, immigration, and the ocean. Students are placed in small groups and assigned stations for activities, and VR headsets may be used to view videos about the topic.
For the ocean unit, teachers uploaded a VR adventure in which students could swim alongside sharks and cuttlefish. The space unit used 360-degree NASA videos, and students took virtual trips to several monuments, including the Statue of Liberty. Students loved the VR stations during engagement days.

Adding Joy

VR can also be a lot of fun. One of our favorite lessons came in December, when kindergarteners were invited on a virtual field trip to Santa’s Workshop. VR headsets in class let students ride in Santa’s sleigh while he delivered gifts, and teachers could use the lesson alongside their lessons about the North Pole.
It was an instant hit. Spreading the word is the best way to get teachers using technology, and this was a great example of that.

Virtual reality can be used to support lessons, but like any new technology, training and implementation are crucial. These suggestions are intended to assist districts that wish to implement virtual reality in their schools.
Plan Professional Development. It Should Be Enjoyable

Teachers will not use technology if they do not feel comfortable with it, so it is essential to introduce it in an engaging and fun way. Our tech team held a summer challenge that gamified their PD: teachers could earn points for trying new tech. A teacher might earn points by using a VR headset with their students and giving feedback to the group, or by sharing something via social media, working toward prizes such as mugs and T-shirts.

“Gamifying PD can be a good idea, in our experience. It makes the work much more fun and keeps it from feeling like a burden. Next, look for your ‘tech evangelists’, the passionate teachers who love tech, and make sure they are using it. They are the ones who will encourage other teachers to join them.”

Use Tools That Are Easy to Use

Technology that is hard to use is counterproductive and distracts from its actual purpose; it should be intuitive and easy to use. As explained above, training is available: we offer it to teachers interested in using our tools, including Spheros, Lego kits, and ClassVR. Teachers can either have a student tech worker assist them or work directly with us. Technology can only be used effectively if reliable products are available that can be implemented quickly.

The Pedagogy

Technology can help students connect to the books they’re reading and the topics they’re researching. You can use it to support everything from ESL instruction to special education. Technology is meant to be a tool that assists instruction, not a goal in itself.

We help teachers select their teaching strategies, pedagogy, and methods, and then help them find the right technology for those methods. The first question we ask is: “What content are you teaching?” Next, we examine which technology can help them. Technology is a wonderful tool to enhance lessons and get students interested in the content, but it doesn’t always make sense for every lesson.

Done well, technology can inspire and engage students and bring the curriculum alive. For schools and districts looking to adopt VR, or any other technology, the considerations above may be helpful.

Filed Under: Technology

Samsung Fined $9.8 Million for Deceptive Australian Phone Marketing

July 29, 2022 by Spencer Edward

On Thursday, a judge in Australia imposed a fine of 14 million Australian dollars (US$9.8 million) on Samsung for deceptive advertising regarding the water resistance of several smartphone models. Samsung Electronics Australia, a division of South Korea-based Samsung Electronics Co., was given 30 days by Federal Court Justice Brendan Murphy to pay the fine.

Additionally, Samsung must contribute AUD 200,000 (about US$140,000) toward the costs incurred by the Australian Competition and Consumer Commission, the consumer protection agency that opened an inquiry into the phones four years ago. Samsung acknowledged that it misrepresented the water resistance of seven different Galaxy smartphone models in nine commercials between 2016 and 2018; the S7 Edge, S7, A5, A7, S8 Plus, S8, and Note 8 are among them. Samsung likewise accepted the penalties levied.

The deceptive advertisements highlighted the phones’ water resistance and appropriateness for usage in seawater and swimming pools. However, if the charging ports were used to recharge the phones while they were still wet, they might be harmed and cease to function. Samsung claimed that the seven models included in the complaint, which were released between 2016 and 2017, were the only ones affected by the charging port problem. According to a Samsung statement, “The issue does not emerge for Samsung’s latest phones.”

The court was unable to determine how many of the 3.1 million susceptible phones sold by Samsung in Australia had charging-port issues. Samsung’s authorized repairers replaced the ports for an unidentified number of customers; the court heard that some repairers provided the service for free while others charged between US$126 and US$171.

Customers have a right to believe that a prominent firm like Samsung would not claim that its Galaxy phones could be immersed in water if they couldn’t, according to Murphy. According to Murphy’s assessment, “a large number of customers are likely to have seen the infringing commercials and a sizable portion of those who did so are likely to have purchased one of the Galaxy phones.”

According to the judge, Samsung’s attorneys initially disputed that the advertisements were deceptive and that water immersion could harm the phones. Murphy said he didn’t think Samsung Australia deserved much credit for cooperating. More than 600 advertisements and 15 different Galaxy phone models were initially the subject of the commission’s inquiries, which Samsung claims it cooperated with. In a statement, Samsung said it aims to provide every customer with the best possible experience and apologized that a small number of Galaxy users encountered a problem with their devices related to this situation.


Filed Under: Technology

New Energy-Efficient Switches Make It Possible to Create Next-Generation Data Centres.

July 14, 2022 by Spencer Edward

Data centers are spaces that store, process, and disseminate data. They enable everything from video streaming to cloud computing. They also consume large amounts of energy to transfer data from one location to another. Data centers must be more efficient with increasing data demand.

High-powered servers located in data centers communicate with each other via interconnects, physical connections that allow the exchange and transfer of data. To decrease energy consumption within data centers, light can be used to carry information, with electrically controlled optical switches directing the stream of light and the communication between servers. To support data center expansion, these optical switches must be multifunctional and energy efficient. A team of scientists from the University of Washington published a paper online on July 4 in Nature Nanotechnology describing an efficient, silicon-based, non-volatile switch that manipulates light using a phase-change material and a graphene heater.

Arka Majumdar is a UW professor of physics and of electrical and computer engineering, and a faculty member of the Molecular Engineering & Sciences Institute and the UW Institute for Nano-Engineered Systems. According to Majumdar, this technology will significantly reduce the energy data centers spend controlling photonic circuits compared with what is currently used, making them more sustainable and eco-friendly. Silicon photonic switches are popular because they are easy to make. Until now, such switches were tuned by the thermo-optic effect, a process in which heat is applied to a material, often a semiconductor or metal, to change its optical properties and alter the path of light. This process is inefficient, and its effects are not permanent: once the current is removed, the material returns to its original state, and the connection and information flow are broken.

Researchers have previously used doped silicon to heat phase-change materials. Although undoped silicon conducts electricity poorly, it can be selectively doped with elements such as boron or phosphorus so that it conducts current while still transmitting light without excess absorption. A current can then be pumped through the doped silicon to switch the phase-change material, but this is not an energy-efficient process; the energy required is comparable to that of traditional thermo-optic switches. A doped silicon layer 220 nanometers (nm) thick must be heated to switch a phase-change layer only 10 nm thick, so a great deal of energy goes into heating a large volume of silicon in order to switch a much smaller volume of phase-change material.

A thinner silicon film could be an option, but silicon won’t guide light well if it is thinner than about 200 nanometers. Instead, the team used a 220 nm un-doped silicon layer to propagate light and added a layer of graphene between the silicon and the phase-change material to conduct electricity. Graphene, like a metal, is an excellent conductor, but unlike a metal it is atomically thin, consisting of a single layer of carbon atoms arranged in a honeycomb lattice. This design reduces wasted energy, as essentially all of the heat generated in the graphene goes into switching the phase-change material. The setup’s switching energy density, calculated as the switching energy divided by the volume of the material being switched, is only 8.7 attojoules per cubic nanometer, a 70-fold decrease compared with widely used doped-silicon heaters and within one order of magnitude of the fundamental limit for switching energy density (about 1.2 aJ/nm³).
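
As a quick illustration of that figure of merit, the TypeScript sketch below simply performs the division described above; the numbers are placeholders, not values from the paper.

  // Switching energy density = switching energy / volume of phase-change material switched.
  // Placeholder values only; lower is better when comparing heater designs.
  const switchingEnergyAttojoules = 5e5;  // hypothetical switching energy, in attojoules
  const switchedVolumeNm3 = 6e4;          // hypothetical switched volume, in cubic nanometers
  const density = switchingEnergyAttojoules / switchedVolumeNm3;
  console.log(`${density.toFixed(1)} aJ/nm^3`);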

Although graphene conducts electricity and can cause some optical losses (meaning some light is absorbed), graphene is thin enough that the phase-change material and light propagating through the silicon layer can interact. A graphene-based heater was found to reliably change the state of the phase-change material for more than 1,000 cycles. This is an improvement on the doped silicon heaters, which only have a lifespan of 500 cycles.

Filed Under: Technology

Get out of the Internet, Get into the Metaverse

July 12, 2022 by Spencer Edward

Matthew Ball is a venture capitalist who first wrote about the metaverse in 2018, and his essays became essential reading for entrepreneurs and tech watchers trying to understand the network that Mark Zuckerberg and other futurists anticipate will outshine the internet. Ball is a former head of strategy at Amazon Studios, and his first book, The Metaverse: And How It Will Revolutionize Everything, is out in July.

What does it mean?

It refers to a persistent network of 3D spaces in which everything online, including web pages, digital operating systems, and applications, is connected by standard protocols.

Does the Metaverse not refer to virtual reality?

I believe it is essential to differentiate access devices and particular experiences from the Metaverse itself. A good analogy is that mobile apps are not the mobile internet: you can also reach the mobile internet via a web browser, or by saying “Hey Siri, what’s the time?” and letting your phone fetch the answer.

What distinguishes it from earlier virtual worlds like Second Life?

Second Life is a good example; it highlights the fact that this idea is not new. The term originated 30 years ago, but the concept has appeared in theory and early literature for nearly a century. Second Life was one of the first and most successful examples of it: it featured a relatively independent economy in which users could conduct transactions among themselves without intermediaries, and it was designed to be unstructured, with no set goal.

Are we to believe that the Metaverse will become anarchist utopias?

Although some believe this is the end of nation-state civilization and community, it is not my view. It’s more likely that the growing influence of regional actors and increased governmental regulation will result in a more assertive regional identity.

In almost all cases, a 3D immersive setting is a more intuitive and productive way to communicate information or ideas than any other.

What issues need to be resolved before we can see something that approaches your vision?

We’re now at a stage where we don’t have common conventions. There is no equivalent of English, the US dollar, or the metric system, and there’s not even an intermodal shipping container. It is often impossible to share the virtual world with others, which is why establishing those conventions is essential if the Metaverse is to expand.

What real-world issues does the Metaverse solve?

In many cases, if not nearly all, 3D immersive environments are a more valuable and intuitive way to communicate information. We know from education that YouTube videos do not make learning easy and that Zoom school is not very engaging. Immersive education hints at the potential advantages: learning not by looking at a TV screen but with haptic sensors and 3D representations, such as gait-analysis models. We can conclude that certain elements will enhance our experience and have more impact than the current web. Technology is fundamentally irreversible.

Are there ethical concerns we should be pondering now, rather than after there have been a billion users?

One of the core objectives of my book is to give users, regulators, developers, consumers, and voters a better picture of what the future will look like so we can positively shape that outcome. Big tech is moving to the Metaverse because it knows what happens during a platform shift: the philosophies that win support change, too. The future is nearing, and if consumers can see it, they will have the chance to choose whom and what they want to lead it. Within a platform cycle, it is unlikely that many of us will switch to a new smartphone provider or social network, or change the content networks we consume. But platform shifts offer us that possibility.

How confident do you feel that this is the future?

There are certain things we can be sure of. I am confident that 3D simulations of the world will become more prevalent in how we design, construct, and operate it; they are already used today to design and operate airports and cities. The Ready Player One-esque version, in which we go to school in virtual worlds, collect virtual currency, and have our favorite skin, is the most essential and predictable aspect of our future.

Although we may find that much of what I wrote is confirmed, we may end up referring to it simply as the internet. I’m also confident that virtual environments and virtually supported environments will become a more significant part of our lives; that is the fundamental change. What it means for you daily, and what it means for you at 5 pm when you return from work, is uncertain.

Filed Under: Technology

What does edge computing look like?

July 8, 2022 by Spencer Edward

Edge computing transforms the way data from billions of devices can be stored and processed. It was created with the aim of lowering bandwidth costs to move raw data from the source to an enterprise data center or the cloud.

Edge computing and the widespread use of 5G wireless protocols are related. Modern, low-latency use cases can be processed more quickly thanks to 5G.

What’s edge computing?

Gartner defines edge computing as “a part of a distributed computing topology in which information processing is located close to the edge – where things and people produce or consume that information.”

Edge computing allows data to be stored and processed closer to where it is created, rather than relying on central locations that are sometimes thousands of miles away. Processing locally lets companies save money by reducing the amount of data that must travel to a cloud-based or central location. Think of devices that monitor production equipment on a factory floor, or cameras that transmit live footage from remote offices.

Edge-computing hardware and services can solve this problem. These devices provide local processing and storage capabilities for many systems.

What is the relationship between edge computing and 5G?

Edge computing can also be used on networks that are not 5G, such as 4G LTE, and the reverse is true: companies can use 5G without an edge computing infrastructure. “By itself, 5G reduces network latency from an endpoint to a mobile tower, but it doesn’t address the distance between the data center and the endpoint, which could pose problems for latency-sensitive apps,” says Dave McCarthy, IDC research chief for edge strategy.

Mahadev Satyanarayanan, a professor at Carnegie Mellon University, co-authored a 2009 paper that established the foundations of edge computing.

Edge computing and 5G wireless will continue to interact as more 5G networks become available. Edge computing infrastructure can be deployed over different network models, including wired connections and Wi-Fi where available, but in remote or rural areas a 5G network may be the more practical option, so companies may still rely on one.

How is edge computing implemented?

While the physical architecture of edge computing can be complicated, the basic idea is that clients connect to a nearby edge node for faster processing. Edge devices can include IoT sensors, an employee’s laptop or latest smartphone, or security cameras.
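
As a rough sketch of that nearby-node idea, the TypeScript below has a client measure round-trip time to a few candidate nodes and send its data to the fastest responder. The endpoint URLs are placeholders, not any vendor's real edge platform.

  // Pick the lowest-latency node: measure round-trip time to each candidate
  // and send sensor data to whichever responds fastest.
  const candidates = [
    "https://edge-factory-floor.example.com",
    "https://edge-regional.example.com",
    "https://cloud-central.example.com",
  ];

  async function roundTripMs(url: string): Promise<number> {
    const start = Date.now();
    await fetch(url + "/health"); // assumes each node exposes a simple health endpoint
    return Date.now() - start;
  }

  async function pickNearest(): Promise<string> {
    const timings = await Promise.all(
      candidates.map(async (url) => ({ url, ms: await roundTripMs(url) })),
    );
    timings.sort((a, b) => a.ms - b.ms);
    return timings[0].url; // lowest measured latency wins
  }

  pickNearest().then((url) => console.log("sending sensor data to", url));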

An industrial edge device can be an autonomous mobile robot or a robot arm in an automobile factory. The terminology for edge servers and edge gateways can differ, and service providers will deploy many edge gateways and servers to support edge networks. Verizon, for example, offers a 5G edge network, while enterprises that want to build a private network need to consider this hardware themselves.

 

How do you purchase and deploy edge computing systems?

There are many options for how an edge system could be bought and used. This would involve selecting the right hardware from vendors such as HPE or IBM and designing a network that meets all the requirements.

Although this is a large undertaking that will require IT expertise, it can still be appealing for large companies who want an entirely customized edge deployment. Vendors who specialize in certain verticals are more skilled at marketing edge services. Organizations can request vendors to install their hardware, software, networking, and other necessary components. The vendors will charge a monthly fee to maintain and use the equipment. This includes IIoT offerings from companies like GE and Siemens. This approach is fairly straightforward and easy to deploy, but it might not work for all uses.

What are some examples of edge computing?

Edge computing is a way to save money and provide the benefits of low latency.

Verizon Business discusses several edge scenarios. These include the use of a 5G edge network to create popup network ecosystems that change the way live content streams are delivered with sub-second latency. Edge-enabled sensors are able to capture detailed images of crowds in public places to improve safety and health. Edge-enabled sensors are also used to precisely model product quality by using digital twin technology. This allows for insight into manufacturing processes.

Different deployments will require different hardware. To operate in harsh environments such as a factory floor, they will require ruggedized edge nodes.

Connected-agriculture users will still require a rugged edge device to deploy outdoors, but the connectivity piece may look quite different. Low latency may still be necessary to coordinate the movement of heavy equipment, while environmental sensors typically need longer range and lower data rates, so a Sigfox or similar LP-WAN connection could be the best choice.

Other use cases pose different challenges. Retailers can use edge nodes as an in-store clearinghouse for a variety of functionality. These nodes tie point-of-sale data to targeted promotions and track foot traffic for an integrated store management system.

This connectivity component could be as simple as in-store Wi-Fi for every device, or more complex, with Wi-Fi serving self-checkout and point-of-sale while Bluetooth and other low-power links handle foot-traffic tracking and promotional services.

What are some of the benefits of edge computing?

Cost savings alone can be a motivator for adopting edge computing.

The greatest advantage of edge computing, though, is the ability to process and store data faster, enabling more efficient real-time applications. Before edge computing, a smartphone running facial recognition would have to send the data to a cloud-based platform for processing, which would be slow and require far more bandwidth; processing at the edge removes that round trip.

Applications such as virtual and augmented realities, self-driving cars, smart cities and building-automation systems require this level of processing and response.

Privacy and security concerns

Security risks can arise when data is stored and managed at the edge, outside a centrally controlled data center.

Edge devices also have many requirements, including electricity, processing power, and network connectivity, which can affect their reliability, so deployments need redundancy and failover management to ensure data is processed and delivered correctly even if a single node fails.

Filed Under: Technology

Government AI Adoption is Being Led by Procurement Officials

July 5, 2022 by Spencer Edward

In the sci-fi film 2001: A Space Odyssey, released more than 50 years ago, the spacecraft’s crew included HAL 9000, a malevolent AI system motivated by its own survival and intent on murdering any human crew member who doubted it.

This cautionary tale illustrates the danger of unintended or extreme outcomes when AI is used to augment human capabilities. How can the government stop AI from acting against human expectations and intentions? What can be done through the procurement process to reduce potentially undesirable behavior?

This potential is being mitigated by government procurement processes that use new specialized principles and methods that encourage the responsible use of AI. This mission-critical journey is full of learning.

The private sector and academia have been the pioneers in the development of AI technologies. Agencies will be able to achieve better outcomes at scale if AI is adopted by the government.

AI can be used to enhance human performance, leading to efficiency gains at previously unattainable levels, orders of magnitude better than the status quo.

However, putting private-sector AI technology to good use can pose challenges for public procurement, Cary Coglianese & Erik Lampman noted in their work on contracting and AI governance. These experts point out that AI’s potential game-changing benefits must be balanced against the potential risks.

Coglianese and Alicia Lai argue in another article that there is no perfect or unbiased system against which to compare AI. Human designers, decision-makers, and government agents bring decades of experience, often underappreciated, to their work, along with their own biases, and AI will not eliminate those biases unless it is deliberately designed to do so. Training-data choices embedded in algorithms can also introduce algorithmic bias of their own, which can hide human prejudices from view.

The government must improve the way it applies technology to reap the benefits of AI without creating new problems. Many issues are at stake, including cybersecurity, equity, diversity, inclusion, and adaptation to climate change. If entrepreneurs and government contracting officers are overburdened, innovation in the use of technology for these important use cases will be discouraged.

The federal acquisition system is a collection of many rules and regulations interpreted by the agencies and their professional contracting officials. The Federal Acquisition Regulation (FAR), along with its derivatives, is the sheet music for a bureaucratic choir and orchestra in which the contracting officer is the conductor. Government procurement regulation is complex; it was created to ensure public confidence in the system’s fairness, and the FAR has mostly met that goal, albeit with hidden and hard-to-observe opportunity costs in the form of forgone performance.

The FAR Part 1 states that contracting officers should employ good business judgment on behalf of taxpayers. However, in practice, Part 1 discretion can be overwhelmed by cultural norms for complying with complex regulations.

Many contracting officers are aware that regulations can be a barrier and discourage companies from embracing technology innovation and collaborating with the government. This recognition has resulted in a flood of procurement innovation by contracting officers to take advantage of new market opportunities that are emerging in the changing landscape.

Sensing a similar need in the United States, Congress extended the 60-year-old Other Transaction Authority, which was created to waive FAR rules for certain agreements and allow experimentation with new technologies such as AI. In recent years, OTAs have been used in a large number of cases, most notably by the U.S. Department of Defense.

These authorities are essential for advancing the art of procuring AI through the Defense Department’s Joint Artificial Intelligence Center (JAIC), but they demand more of contracting officers, who must rely on sound business acumen rather than the detailed rules embedded in the FAR when creating OTAs.

The JAIC has created Tradewind, an AI contracting “golf course” whose tees include the business acumen and freedom of OTAs. Tradewind can be used across all levels of government to facilitate faster and more efficient AI acquisition.

Responsible AI (RAI) is a new collection of AI-specific principles that forms part of the JAIC’s enterprise-wide AI initiative. The Defense Department’s commitment to RAI begins with the top leadership of each department.

The Department of Defense’s new Chief Digital and AI Office is the central point of execution for its AI strategy. RAI principles guide the development and accelerate the adoption of AI through innovative acquisition approaches. The new acquisition pathway is based on OTA and related authorities and includes an infrastructure of contract vehicles, such as test and evaluation support, described by the Defense Innovation Unit. Contracts based upon challenge statements can be completed in as little as 30-60 days, allowing agencies to quickly develop and capitalize on new techniques.

However, most important civilian agency missions revolve around allocating resources.

For example, U.S. Department of Health and Human Services missions must prevent illegal socioeconomic biases. The National Institute of Standards and Technology (NIST), which guides procurement teams to avoid such bias, is remarkable in its approach to data, testing and evaluation as well as human factors. NIST analyzes prospective standards to identify and prevent socioeconomic biases in the deployment of AI solutions.

Filed Under: Technology

What does RCS messaging mean? All the information you need about the SMS successor

July 4, 2022 by Spencer Edward

The texting technology we have used to communicate since the 1990s is aging: SMS doesn’t support encryption or group messaging.

Despite SMS’s popularity, some people need more than the service can provide. Rich Communication Services, also known as RCS Chat, was developed by mobile operators and industry bodies as a modern form of texting, bringing the features of WhatsApp, iMessage, Facebook Messenger, and other rich messaging apps into one platform.

Use RCS

Google offers RCS Chat globally via its Messages app, so it is available to all Android users who have the application installed, a rollout made possible in part by a partnership between Google and Samsung.

The background of text messaging

Text messaging existed before the iPhone, the BlackBerry, and the Palm Pilot. The first proposal for SMS was made in 1982 for the Global System for Mobile Communications (GSM).

The framework was created to transmit text messages over the signaling systems that control telephone traffic. ETSI’s designers made it compatible with existing signaling paths (an initial 128-byte payload was later increased to 160 seven-bit characters), yet modular enough to permit carrier management features such as real-time billing, message routing (directing messages to a different recipient than the one a user specifies), and message blocking.
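
That 160-character figure falls out of simple packing arithmetic: in the standard GSM 7-bit alphabet, eight characters fit into every seven octets of a 140-octet user payload. A quick TypeScript check (these are the standard GSM numbers, not figures taken from this article):

  // 140 octets of user data, 7 bits per character in the GSM default alphabet.
  const payloadOctets = 140;
  const bitsPerCharacter = 7;
  const maxCharacters = Math.floor((payloadOctets * 8) / bitsPerCharacter);
  console.log(maxCharacters); // 160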

After nearly a decade of tinkering, SMS was finally deployed commercially in December 1992, when engineer Neil Papworth sent Richard Jarvis the first text message, a simple “Merry Christmas.”

Despite SMS’s rapid growth, the technology hasn’t changed much over the past 20 years. Through advances in phone form factors and the rise of the touchscreen iPhone, SMS has remained capped at 160 characters.

What is RCS?

Rich Communication Services (RCS), originally promoted as a successor to SMS, got off to a slow start. It was later brought under the GSM Association, a trade organization, where it languished for almost a decade. RCS now uses the RCS Universal Profile, a global standard for implementation that allows subscribers in different countries and on different carriers to communicate with each other.

Chat is visually similar to iMessage and other commercial messaging applications, but there are some nice extras. These include branded informational messaging and content sharing, such as images, video clips, and GIFs. Customers can also be updated about upcoming flights and boarding passes, and eventually they will be able to choose airline seats directly from Android Messages. Chat works across devices and is hardware-independent. Chat could be used on iOS too, but Apple, which represents half of the U.S. smartphone market and 70% of U.S. cell phone owners aged 18-24, does not support the protocol, despite growing pressure from Google.

Chat is a protocol

Chat isn’t a messaging service per se; it is the friendly name for the RCS protocol and the RCS Universal Profile. There are several parts to this. First, your device and messaging application must support Chat. Second, your recipients must also be able to use Chat; otherwise, chat messages fall back to SMS.

RCS brings Android messaging into the 21st century with read receipts. It also allows people to chat over Wi-Fi or mobile data, create group chats, and add or remove members.

Chat – Who is it for?

The RCS community includes approximately 60 supporters, among them 47 mobile operators and 11 OEMs. RCS requires both new software and network support, and many manufacturers didn’t want to create software to retroactively support it. Google now provides Chat to customers via its own app, which removes the need to wait for carrier support, and Microsoft has also committed to the protocol. RCS is now supported by all US carriers, making it easier to implement the standard on mobile virtual networks.

Cross-carrier initiative blues

The Cross Carrier Messaging Initiative was formed by the four largest U.S. carriers, Verizon, AT&T, T-Mobile, and Sprint, in 2019. The joint venture was created to standardize RCS independently of Google. The carriers later abandoned it, but say they remain committed to improving customer communication and increasing the availability of RCS, and T-Mobile, Verizon, and others have kept their hats in the ring individually. In March 2021, T-Mobile signed a deal to make Google’s Messages the default messaging app on its Android phones.

Where is it now?

RCS is available now as part of your data plan, but you have to opt in via Google and your provider.

Android users can use the RCS standard to share high-quality photos and videos. Google has now integrated its Photos app into Messages, which allows users to send high-quality videos as Google Photos links in an RCS conversation; photos will soon be shareable the same way.

While the original RCS protocol only allowed client-to-server encryption, chat in Google Messages now supports end-to-end encrypted conversations. RCS still doesn’t support tablets, laptops, or desktops.

Many telecom companies worldwide have signed up for the RCS Universal Profile. Vodafone, Orange, Deutsche Telekom, and U.K. operators have all fully implemented RCS in their home markets.

Is RCS set to become the default successor to SMS? Given the slow progress of the last decade, it is difficult to believe it will, particularly if Apple doesn’t pick up the torch.

Filed Under: Technology

Different Ways to Use Google Voice that You May Not Know About

June 28, 2022 by Spencer Edward

Google Voice gives you a phone number you can use to make calls, send text messages, and receive voicemails. The number can be used for international and domestic calls from your web browser or mobile device. Google Voice, which provides a U.S. telephone number, is available to American Google Account customers and to Google Workspace customers in countries such as Canada and Denmark. This cloud-based service replaces your traditional phone number: the number does not belong to any particular phone or SIM card, it is stored on a Google server, and it is controlled entirely through Google software. Although that might seem odd, the arrangement lets you connect in many efficient ways. Whether you are on a smartphone, tablet, or desktop, you can make and receive calls, and anyone on the other end of the conversation won’t notice the difference.

You can send and receive SMS messages via the Voice website or apps from any device at any time, and multiple connected devices are supported. Any device you sign in to, tablet or phone, becomes “your phone,” regardless of whether it has an active cellular connection.

For example, a user can:

  • Add an older Android phone to the Google Voice app so it rings whenever your number is called, can place outgoing calls, and can send and receive texts on your regular number, as long as it is connected to Wi-Fi.
  • Install the Google Voice app on an Android tablet or Chromebook to communicate with customers and co-workers.
  • Access Google Voice from any computer to make calls and send messages.
  • Have Voice automatically transcribe voicemails, then listen to or read them on any device you are signed in to.
  • Let Google Voice screen calls, voicemails, and text messages, and forward calls contextually, much like Gmail filters.

This is a powerful but complex service, so let’s dive in and learn what all the pieces mean so you can get the most out of them. To get started, sign in to Google Voice and create an account; this is much easier to do on a computer. With an individual, non-company Google account, you can immediately choose a new Google Voice number in any available area code for free, or pay $20 to port an existing number into the service. You must be located in the United States to be eligible for either option.

Google Workspace accounts connected to a company in the United States, Canada, Belgium, or Denmark can access Voice, and it is also available in France, Germany, Ireland, Portugal, Spain, Sweden, and Switzerland. The service must be activated by your Workspace administrator, and the cost ranges from about $10 to $30 per user per month depending on the tier. Whichever option you choose, once your number is set up you can access the main Google Voice dashboard, where you can see recent calls and messages, place new calls, and listen to or read voicemails left on your number.

Filed Under: Technology

The Hubble Space Telescope Has Captured a Beautiful ‘Hidden Galaxy’ Beyond Our Milky Way.

June 9, 2022 by Spencer Edward

IC 342, also known as Caldwell 5, is a spiral galaxy located eleven million light-years from Earth. Hubble captured a stunning, face-on view of the galaxy’s core, featuring entangled tendrils of dust wrapped around a glowing center of hot gas and stars.

Observing the spiral galaxy IC 342 is a struggle. NASA stated in a May 11 press release that “it appears near the equator” of the Milky Way’s pearly disc, which is dense with cosmic gas, dark matter, and glowing stars, all of which make the galaxy difficult to see. Hubble can peer through this obscuring material because of its infrared capabilities: infrared light is scattered less by dust, allowing a finer view of the galaxy beyond the intervening interstellar matter.

Of the image, NASA said, “This stunning, face-on view shows the galaxy’s center with intertwined tendrils of dust that wrap around a brilliant core of hot gas and stars.” The core is a specific type of region that NASA labels an H II nucleus, an area of atomic hydrogen that has become ionized. Such regions can be energetic birthplaces of stars, where thousands of stars could form over just a few million years.

NASA said that because they emit ultraviolet light, hot blue stars ionize and energize the hydrogen around their birthplaces. The galaxy would be among the brightest in our sky if there weren’t so much dust in the way: IC 342 is quite close, only 11 million light-years from Earth, and it is a substantial fraction of our Milky Way’s size and mass, quite massive in its own right.

Filed Under: Technology

Quantum computers could revolutionize technology in the world

May 26, 2022 by Spencer Edward

Supercomputers are used by engineers and scientists to solve difficult problems. These supercomputers are large, classical computers that often have thousands of CPU and GPU cores. But even supercomputers have difficulty solving certain types of issues. Quantum computing, a rapidly emerging technology, uses the laws of quantum mechanics to solve complex problems that are too difficult for classical computers.

Complexity is often the reason classical computers fail: a supercomputer gets stumped when it is given a problem with too many interconnected variables. Modeling the behavior of individual molecules is difficult because of the interactions of many electrons, and so is determining the best routes for a few hundred vessels in a global shipping network.

The best way to understand a quantum computer, short of spending a lot of time at Caltech or MIT, is to compare it with a classical computer. A classical machine’s computing power comes from its sheer number of transistors: a modern chip of roughly 120.5 square millimeters can hold sixteen billion transistors, whereas the first transistorized computers had fewer than 800. The semiconductor industry’s ability to engineer ever more transistors onto a single chip is what has allowed the exponential growth in computing power.

However, there are certain things that classical computers won’t be able to do. This is where the unique and bizarre properties of quantum computers come in.

Quantum computers process information using qubits instead of bits. Qubits can be both “0” and “1” at once. How does that happen? It is hard to explain intuitively, but qubits exploit the quantum mechanical phenomenon called “superposition,” in which a subatomic particle’s properties are not determined until they are measured. Think of Schrödinger’s cat, simultaneously alive and dead until the box is opened.
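
As a toy illustration (plain TypeScript, not a real quantum-computing library), a single qubit can be described by two amplitudes, and measuring it yields 0 or 1 with probabilities given by the squares of those amplitudes.

  // One qubit in an equal superposition: amplitudes for |0> and |1>.
  const alpha = Math.SQRT1_2; // amplitude of |0>
  const beta = Math.SQRT1_2;  // amplitude of |1>
  console.assert(Math.abs(alpha ** 2 + beta ** 2 - 1) < 1e-9); // state must be normalized

  // A measurement forces a definite outcome: 0 with probability alpha^2, else 1.
  const measure = (): 0 | 1 => (Math.random() < alpha ** 2 ? 0 : 1);

  // Repeating the experiment on identically prepared qubits reveals the statistics.
  const counts = { zero: 0, one: 0 };
  for (let i = 0; i < 10000; i++) {
    if (measure() === 0) counts.zero++;
    else counts.one++;
  }
  console.log(counts); // roughly 5000 / 5000 for an equal superposition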

Quantum computers could simulate the properties of a hypothetical battery to design one that’s more efficient and powerful than the current versions. They could solve logistical issues, find optimal delivery routes, and improve climate science forecasts.

Quantum computers could also break much of today's cryptography, potentially making everything from emails to financial data insecure. That is why the race to build the world's best quantum computer has become an international competition; the Chinese government alone has invested billions of dollars. These concerns prompted the White House to issue a memorandum earlier this month aimed at building national leadership in quantum computing and preparing the country for quantum-assisted cybersecurity threats.
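The cryptographic threat boils down to integer factoring: RSA keys are safe today because factoring huge numbers classically is hopelessly slow, while Shor's algorithm on a sufficiently large quantum computer could do it efficiently. A toy classical sketch in Python, using a tiny semiprime that no real key would ever use, shows the brute-force approach.

    # Naive trial division: fine for toy numbers, hopeless for real RSA moduli,
    # which are hundreds of digits long. Shor's algorithm would not be.
    def factor(n: int) -> tuple[int, int]:
        d = 3                     # n is assumed odd here, so skip even divisors
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return 1, n               # no divisor found: n is prime

    print(factor(2003 * 2011))    # -> (2003, 2011), found almost instantly

Scale the same idea up to a 2048-bit modulus and trial division, along with every known classical algorithm, runs out of time; that is exactly the asymmetry quantum computers threaten to erase.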

Filed Under: Technology

Newest Google Nest cameras now include Amazon Alexa integration

May 17, 2022 by Spencer Edward

The newest Google Nest cameras can now stream to Amazon Alexa smart displays, Fire TVs, Echo Show devices, and Fire tablets, which is great news for divided smart homes. Google has updated its Alexa skill to include its newest cameras, following Amazon's announcement earlier this week that third-party cameras can use its new package and person announcement capabilities on Echo smart speakers.

The Google Nest Alexa skill offers the ability to watch a live stream from the Google Nest Cam (indoor/outdoor, battery), Google Nest Cam (indoor, wired), and Nest Cam with floodlight on Echo devices, along with motion announcements. Google's Nest Doorbell (battery) integration now also includes doorbell-press notifications and two-way communication.

This is in addition to the existing capability for live views from the older Nest Cams, the Nest Doorbell (wired), previously known as the Nest Hello, and the Nest Cam IQ (outdoor/indoor). According to Amazon, the new human detection announcement capability will come to the new cameras at some point, but package detection announcements are not currently planned.

Users can now stream any of the newer Nest cameras to an Echo Show and view a live feed on a Fire tablet or Fire TV. They can also get motion notifications on their smart displays and Echo speakers, and use an Echo Show to see and chat with visitors at the Nest Doorbell.

Users can get doorbell-press notifications and two-way conversation on the Nest Doorbell, allowing them to use an Echo Show as an intercom for their doorbell. Of course, owners of a Google Nest display or a Ring Doorbell paired with an Echo Show have been able to do this for some time; both of those systems can instantly bring up the feed on the smart display when the doorbell is pressed. That does not appear to be the case for this cross-platform integration.

“Alexa, show the [camera name] stream” and “Alexa, answer the front door” are two of the new skill's voice commands. The skill also retains the earlier Google Nest capabilities that let users operate their Nest Thermostats (all models) and older Nest cameras. For users with older Nest devices, the skill will only work once they have migrated their Nest account to a Google Account.

All of this smart home harmony is encouraging, and it's likely a forerunner to the upcoming Matter smart home standard, which has the potential to bring all of our devices together so that users can control anything with whichever voice assistant or app they want. While cameras were not included in the first revision of the Matter specification, it's encouraging to see that everyone is still getting along.

Filed Under: Technology

Russian Cosmonauts to Activate Space Station’s New Robotic Arm

May 11, 2022 by Elizabeth Moseley

On Monday, April 18 (UPI), two Russian cosmonauts aboard the International Space Station completed the first of two spacewalks to activate the station's new European Robotic Arm. The two cosmonauts, Oleg Artemyev and Denis Matveev, began their spacewalk at 11:01 a.m. ET; it ended at 5:37 p.m. ET, for a reported duration of six hours and 37 minutes. During Monday's spacewalk, the pair removed the arm's protective covers and installed handrails outside the Nauka module.

NASA live-streamed the spacewalk on its website as the cosmonauts worked to activate the 37-foot-long arm, which will be used to transport heavy items and assist spacewalkers. The European Space Agency said the new arm will move across the Russian segment of the space station and can carry loads of up to 17,000 pounds. It is one of three robotic systems that can grab and move large objects outside the ISS.

A second spacewalk has been scheduled for April 28, when the cosmonauts will continue preparing the European Robotic Arm for operations. On that excursion, the pair will remove the thermal blankets that protected the robotic arm during its launch with the Nauka module last year, flex the arm's joints, release restraints, and test its grappling ability. Monday's outing was the first spacewalk for Matveev and the fourth for veteran spacewalker Artemyev.

“Additional spacewalks are planned to continue outfitting the European robotic arm and to activate Nauka's airlock for future spacewalks,” NASA said. The Russian module arrived at the space station after Moscow retired the Pirs module, which disintegrated in the atmosphere during re-entry. Eleven spacewalks are planned to ready Nauka (the Russian word for “science”). The module, which docked at the station in July, will serve as a research lab, storage unit, and airlock for the Russian segment.

The module caused a major mishap hours after its arrival when its jet thrusters fired inadvertently, briefly throwing the orbiting outpost out of control. Vladimir Solovyov, designer general at Energia, a Russian space company, sought to reassure international partners that the incident had been contained and said cosmonauts would have the Nauka module up and running soon.

Asked how geopolitical tensions with Russia have affected life on the space station, NASA astronaut Dr. Tom Marshburn said during a Friday news conference that the crews maintain a “collegial, amicable relationship together up here, and we're working together.”
He said the NASA crew and Russian cosmonauts regularly share meals and watch movies together. “We rely on each other for our survival,” Marshburn said. “It is a dangerous environment. And so we go with our training; we go with recognizing that we are all up here for the same purpose: to explore and keep this space station maintained.”

Filed Under: Technology

Apple Announces The Dual-port 35W Fast Type C Charger for iPhone and iPad

April 11, 2022 by Samuel Roan

Apple lags behind its competitors when it comes to fast charging. Oppo, Xiaomi, and OnePlus offer chargers that deliver up to 80W, while Apple's standard iPhone adapter tops out at 20W. Now Apple appears to be developing a 35W charger, a dual Type C wall adapter that would let customers charge two devices, such as two iPhones, simultaneously.

The new 35W dual-port Type C charger is designed to charge an iPhone and iPad quickly. With its two ports, you could also charge a device and an Apple Watch at the same time. The charger is expected to be backward compatible with previous-generation Apple devices, so it could be used with older hardware as well.

Reports claim this is the first time Apple has referenced a dual-port charger of its own, citing a support document that briefly appeared on Apple's website. Apple quickly removed the document, making it unclear when, or whether, the charger will go on sale. Before it was pulled, the document contained some details about the charger.

According to the document, the new dual-port fast Type C charger has a sleek, minimalist design that will complement any Apple device and is said to charge connected devices at a combined output of up to 35W. That is good news for anyone who wants to power an iPhone and iPad from a single adapter.

The document described the capabilities and characteristics of the USB-C adapter, stating that devices can be used with an Apple 35W Dual USB-C Port Power Adapter (not included).

The charger should work with any Apple device that supports USB-C charging. It is positioned as faster than Apple's current adapters, which will appeal to people who rely heavily on an iPhone or iPad: topping up more quickly helps ensure there is enough battery when it is needed most.

According to the document, you attach a USB-C cable to one of the ports on the power adapter, extend the prongs if necessary, and plug the adapter into an outlet. Make sure the outlet is easily accessible so the adapter can be disconnected quickly if needed.

Then connect the other end of the cable to your device. If released, the 35W dual Type C adapter would be Apple's fastest iPhone charger to date; the iPhone 13 Pro Max supports charging at up to 27W.
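For a rough sense of what those wattages mean in practice, here is a back-of-the-envelope Python estimate. The battery capacity is an approximation on my part, and real charging tapers off near full, so actual times run longer than these idealized figures.

    # Idealized charge-time estimate; ignores conversion losses and the
    # slowdown near 100%, so real-world times are noticeably longer.
    battery_wh = 16.7   # approximate iPhone 13 Pro Max battery energy (assumption)

    for watts in (20, 27, 35):
        minutes = battery_wh / watts * 60
        print(f"{watts} W -> ~{minutes:.0f} minutes for a 0-100% charge")

    # The phone itself tops out around 27 W, so the 35 W rating mainly helps
    # when the second port is charging another device at the same time.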

If the charger is released, customers will be able to fast-charge a single iPhone, charge two iPhones simultaneously, or charge a combination of Apple Watch, iPhone, and iPad at the same time.

The support document did not mention a bundled cable, which implies that consumers would need to buy the cord separately if the charger goes on sale. Rumors have also circulated that Apple will use GaN (gallium nitride) technology, which allows chargers to be smaller and more efficient than traditional ones.

If you're looking for a new Apple charger and this one does arrive, the dual-port 35W fast Type C adapter looks like a good option: two ports that can charge an iPhone or iPad quickly, with a sleek, easy-to-use design.

Filed Under: Technology

SPHEREx Mission to Probe the Cosmic History of Our Origins; Will Launch in April 2025

March 31, 2022 by Samuel Roan

In April 2025, the SPHEREx mission is scheduled to launch to explore the cosmic history of our origins. Rather than traveling to distant worlds, the spacecraft will survey the entire sky from Earth orbit, searching for and characterizing hundreds of millions of galaxies along with objects closer to home, such as comets and the clouds where new stars are born. The mission will be able to detect water ice and organic molecules, ingredients associated with life, frozen in those star-forming regions.

NASA is preparing to launch the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) on a SpaceX rocket in April 2025. The mission shares its infrared-astronomy heritage with the James Webb Space Telescope, but where Webb studies individual targets in depth, SPHEREx will scan the whole sky. It will also examine conditions in the universe's earliest moments after the Big Bang. Continue reading to learn what we know so far about preparations for the mission.

What We Know About Our Cosmic History

The universe is vast and ancient, and scientists have pieced together a great deal of our cosmic history in recent decades. Here's what we know so far. The universe is expanding and cooling: it was once filled with hot, dense gas and plasma, but over time gravity pulled that material together to form stars and galaxies. The universe is estimated to be around 13.8 billion years old. There are on the order of 100 billion galaxies in the observable universe, many containing hundreds of billions of stars; our own Milky Way holds somewhere between 100 and 400 billion. The first signs of life on Earth appeared about 3.8 billion years ago, when simple cells first assembled from organic molecules.

The Technologies That Will Probe Our Origins

The search for our origins has been ongoing for centuries, and there are many technologies that are currently being used to probe back in time. Some of the most popular methods include archaeology, paleontology, and astronomy. These techniques allow us to explore past cultures and learn more about the origins of life on Earth. In addition to these traditional methods, there are also newer technologies that are being developed to help probe deeper into our past. These include gene sequencing and artificial intelligence. As we gain a better understanding of our history, we can continue to build a better future.

Information about the mission

The SPHEREx mission is slated to launch on a SpaceX rocket in 2025 and will tackle a range of questions about how the universe developed. It will survey the entire sky to learn how galaxies formed and evolved, complementing computer models of that history with real all-sky measurements. The probe is expected to map the full sky within about six months, building a map of the whole cosmos at a level of detail that has never been achieved before.

Learn more about SPHEREx

SPHEREx is a space telescope built for breadth: it is designed to observe large areas of the sky and examine many objects at once. According to NASA, SPHEREx can cover 99 percent of the sky every six months, whereas Hubble, a targeted observatory rather than a survey instrument, has imaged only about 0.1 percent of the sky in its 30 years of operation. SPHEREx will also help researchers measure the amount of life-sustaining ingredients, such as water ice and organic molecules, in the enormous clouds where new stars and their planetary systems form.
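To put those coverage figures side by side, here is a quick Python sanity check using only the percentages quoted above (so it inherits whatever rounding they contain):

    # Compare the sky-coverage rates quoted in the article.
    spherex_fraction, spherex_years = 0.99, 0.5    # 99% of the sky in six months
    hubble_fraction, hubble_years = 0.001, 30.0    # 0.1% of the sky in 30 years

    ratio = (spherex_fraction / spherex_years) / (hubble_fraction / hubble_years)
    print(f"SPHEREx surveys roughly {ratio:,.0f}x more sky per year than Hubble has")

The two telescopes are built for different jobs, Hubble for deep looks at individual targets and SPHEREx for breadth, but the ratio shows why an all-sky map needs a survey instrument.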

Conclusion: What the SPHEREx mission could mean for our understanding of cosmic history

What if the universe is just one big cycle of life and death? What if our planet and all its inhabitants are just a temporary stop on an ongoing cosmic journey? These are some of the questions that astronomers, among them Jason Wright of Pennsylvania State University, hope to see addressed by the new SPHEREx mission.

SPHEREx is a space telescope that will use highly sensitive infrared detectors to study the farthest reaches of the universe. Researchers believe its findings could help us better understand our place in the cosmos and the cycles of birth and death that play out, on Earth and among the stars, time and time again.

Filed Under: Technology

WhatsApp Users Will Be Able to Post Message Reactions

March 23, 2022 by Jeffrey Herrera

WhatsApp is testing a new feature that will allow users to post message reactions. The feature, which is currently being tested with a small number of users, will allow people to express their opinions about messages with emojis.

If the test is successful, the reaction feature will be rolled out to all WhatsApp users. The reaction icons will be displayed below messages and users will be able to tap on them to see the reactions of other people.

What are reactions? Reactions are a quick way of showing emotion toward a message without typing a full reply. Facebook popularized the format with six options: like (thumbs up), love, haha, wow, sad, and angry.

Reactions first appeared on Facebook, which began testing them in 2015 to make it easier for users to show their emotions and respond to posts quickly.

WhatsApp is now following suit, adding reactions so that users can respond to messages faster and show how they feel without having to type a reply.

WhatsApp is expected to make the feature broadly available in the next few weeks. Reactions can be used to convey feelings such as love, amusement, surprise, or sadness, and they appear below the message text, visible to all participants in the conversation.

The new reaction feature was first announced by CEO Mark Zuckerberg in February of this year. At the time, he said that the feature would be tested in India before it was released worldwide. The company has been working on the feature for some time and it is now ready for release.

Reactions will give WhatsApp users another way to communicate with each other. They can use reactions to express their feelings when they cannot type a response or when they want to add more emotion to their message.

WhatsApp has not yet announced a release date for the reaction feature but it is likely that it will be released in the near future.

What is the most annoying thing about WhatsApp? For many, it's the inability to quickly react to someone's message: you have to choose between sending an emoji as a separate message or writing a completely unnecessary reply. Quick reactions are already available in Facebook Messenger and other chat apps such as Signal and Viber, but until now they have not been available on WhatsApp.

That could change soon. WABetaInfo reports that the reaction feature is now available to users who have installed the latest WhatsApp beta, version 2.22.8.3.

The feature works much like it does in other chat apps. The emoji options include thumbs up, heart, folded hands, laughing face, crying face, and surprised face.

For now, only a few beta users appear to be able to send reactions, although WABetaInfo says all users should be able to see them, which makes it all the more frustrating for those who can't use them yet.

Although there is no guarantee the feature will reach everyone, WhatsApp appears to have been working on it for months, so that moment is likely to come soon.

Filed Under: Technology

Google is ending its free legacy G Suite plan on July 1

January 20, 2022 by Spencer Edward

Google's productivity suite has been called many things over the years: it started out as Google Apps, was rebranded G Suite, and is now Google Workspace. Over that time, the company has also offered many ways to access the software, introducing new subscription plans and discontinuing older ones. Now it plans to retire a tier that survived the suite's most recent rebranding.

9to5Google found an email in which the company informed Workspace administrators that the G Suite legacy free edition will no longer be available after July 1, 2022. These users will need to transition to paid accounts, and Google says it will automatically choose a subscription plan, based on current usage, for anyone who has not picked one by May 1. Individuals and organizations that move to a paid plan will not be charged for at least two months, but the company says it will suspend the accounts of those who have not provided billing information by July 1.

Monthly fees for business and enterprise Workspace accounts start at $6 per user, and Google says it will offer deep discounts to anyone affected by the change. The move won't affect people who simply use Gmail, Docs, and Sheets with a personal Google account. Eligible schools and nonprofits will continue to get free Workspace plans through the Fundamentals tier, and organizations with legacy G Suite Basic, Business, Education, or Nonprofit subscriptions don't have to worry about a surprise bill.

Filed Under: Technology
