Tech Archives - Stock Sector


Stock Sector · October 18, 2018 · 8 min read


More or less since Nietzsche declared God “dead” nearly 140 years ago, popular wisdom has held that science and religion are irreparably misaligned. However, at a recent conference hosted by the Vatican, I learned that even in the era of artificial intelligence and gene splicing, religious institutions and leaders still have much to contribute to society as both moral compass and source of meaning.

In April this year, the Vatican launched Unite to Cure: A Global Health Care Initiative at the Fourth International Vatican Conference. This international event gathered some of the world’s leading scientists, physicians and ethicists — along with leaders of faith, government officials, businesspeople and philanthropists. The goal was to engage about the cultural, religious and societal implications of breakthrough technologies that improve human health, prevent disease and protect the environment. I had the privilege of participating as a board member of the XPRIZE Foundation.

We are living at a phenomenal point in human history. It’s a moment when our machines are flirting with godlike powers. AI and ever-accelerating innovations in medical technology are enabling humans to live longer than ever. Yet with increased machine capabilities and human longevity come heavy questions of morality and spirituality.

When bodies live longer, so do the souls inside of them. What are the spiritual implications for people who are given an additional 30 or even 50 years of life? Is enhanced longevity meddling with creation, or a complement to it?

As technology disrupts the way we relate to the few remaining physical and spiritual mysteries of humanity, it also disrupts the way we embrace religion.

It is here, at this nexus of technology and spirituality, that the Vatican wisely decided to bring together thinkers from both science and faith.

It was humbling to sit inside the tiny and unconventional country that we call Vatican City, surrounded by the world’s leading scientists, ethicists, venture capitalists and faith leaders. We talked about regenerative medicine, aging reversal, gene editing and cell therapy. We discussed how humanity is shifting from medicine that repairs and remediates toward a system that overtly changes our physical composition. We discussed the incredible augmentations available to the disabled — for example 3D-printed prosthetic limbs. How long before the able-bodied begin to exploit these enhancements to augment their own competitive advantage in an increasingly crowded world? To what extent, if any, should society attempt to control this paradigm shift?

One of the more interesting discussions surrounded how to ensure that humans don’t just live longer, but also better.

What exactly does “living better” entail? Does it imply physical comfort, spiritual well-being, financial security? At this moment in history, we have more instant, seemingly unlimited access to information than the kings and queens of ancient Greece or the Middle Ages could ever have imagined. That technological power is allowing more and more people to become enormously wealthy, at a speed and magnitude that would have been unthinkable for anyone other than a monarch just a century ago.

But are these people living “better”?

Inasmuch as longer-living humans use their accrued wealth to support and encourage the creation of projects as audacious and ambitious as — for example — the Colosseum, I believe the answer is yes. If longevity and riches encourage the average human being to create change on a scale that matches the enormous potential of our exponential times, all the more so.

Yet, others in the room had a different take. For many religious leaders, “better” meant a more sharply defined relationship with God. For some scientists, “better” meant a life that creates fewer emissions and embraces better and smarter technology.

It was astounding, really. In one of the most hallowed spots on earth for the Catholic Church, sharing oxygen and ideas with cardinals and future saints, stood the world’s leading researchers, scientists and corporate leaders, who hold in their hands the technology to extend human life. Together with the clergy of the world’s great monotheistic religions, we held an open dialogue about how to improve the heart and soul of human life while the technology we create continues to advance beyond our ancestors’ wildest imaginations.

In this conference, the Vatican very correctly leveraged the opportunity for organized religions to disrupt themselves by thinking about how they can be meaningful contributors to the conversation on spiritual, physical and mental well-being in the future.




Stock Sector · October 18, 2018 · 7 min read


So far, the impact of information technology on overall productivity has been a mixed bag, even a disappointment. IT has been reshaping workplaces in a big way since the 1980s, yet there appears to be little to show for all this progress — many argue that technology may even inhibit productivity growth.

There are many reasons why the proliferation of technology doesn’t automatically translate to productivity growth. For one, “technological disruption is, well, disruptive,” Harvard’s Jeffrey Frankel observed in a recent World Economic Forum report. “It demands that people learn new skills, adapt to new systems, and change their behavior. While a new iteration of computer software or hardware may offer more capacity, efficiency, or performance, those advantages are at least partly offset by the time users have to spend learning to use it. And glitches often bedevil the transition.” Add to that the security issues and cyberattacks that individuals and organizations must constantly fend off, and things get even more gummed up. Finally, people are being inundated with information and distractions by the minute.

Irving Wladawsky-Berger weighed in on this question in a recent Wall Street Journal piece, observing that technology is often brought in to automate existing processes — which merely speeds up what the business is already doing. That was the dilemma with the first wave of IT in the 1980s and 1990s, and it is likely what we’re experiencing now. The lesson the first time around, which Wladawsky-Berger frames in terms of the “Solow Paradox,” was that “companies realized that using IT to automate existing processes wasn’t enough,” he points out. “Rather, it was necessary that organizations leverage technology advances to fundamentally rethink their operations, and eliminate or re-engineer business processes that did not add value to the fundamental objectives of the business.”

More technology, more complexity.Photo: Joe McKendrick

That’s even more the case these days, as organizations pour more money into the promise of digital transformation, expecting overnight rewards. “We are experiencing a kind of Solow Paradox 2.0, with the digital age more around us than ever except in the productivity statistics,” Wladawsky-Berger writes. “There are several reasons for this lag. First of all, we’re in the early deployment years of major recent innovations, including cloud computing, IoT, big data and analytics, robotics, and AI and machine learning.”

Will AI eventually increase productivity? Overall, it’s still unknown, but there are benefits that can quickly be realized at an individual or departmental scale. A recent report from Constellation Research, sponsored by Microsoft, states that AI may help boost personal productivity in a number of profound ways. AI advances the traditional software model, “allowing applications to learn and improve over time, without needing to roll out a new version,” relates Alan Lepofsky, the report’s author. “AI-enhanced software can assist in a variety of processes, from automating mundane tasks such as scheduling meetings to filtering through thousands of documents in order to recommend the best content.”

AI may be a catalyst for productivity because it eases collaboration in workplaces. Here are some of Lepofsky’s ideas on how this is happening:

AI promotes more natural interaction. “Perhaps the subtlest yet important manifestation of AI is how people can now interact with devices and applications in ways that mimic human interaction,” Lepofsky states, pointing to user-friendly interfaces such as natural-language processing as examples of this.

AI helps to automatically categorize information. Until recently, tagging information or images was a manual process done by a dedicated few. “AI greatly assists in this process, either using image recognition to add tags to pictures or scanning documents to extract keywords,” he observes.

AI automates recommendations. “As AI learns our patterns and preferences, tools can start to recommend answers or replies for us.” These recommendations will eventually become automatic actions, Lepofsky says.

AI inspires creativity. “Not everyone has an eye for color, fonts, layout or other important elements of design. What if your applications could perform those functions for you?” Employees can become storytellers, he adds.

AI extracts insights. “One of the greatest benefits of AI is its ability to look at massive data sets and find patterns and trends,” says Lepofsky. AI can extract the right and relevant background data to “help knowledge workers or first-line employees make better-informed decisions and recommendations.”

Technology appears to be more overwhelming than productivity-boosting. AI may help sort things out. Here’s hoping.




Stock Sector · October 18, 2018 · 13 min read


A lawyer for Huawei denied the allegations, which were made in a countersuit in response to a complaint Huawei itself had filed last year.


Photo: Aly Song/Reuters

An escalating battle between the U.S. and China for supremacy in semiconductor technology is playing out in federal court between Chinese telecommunications giant Huawei Technologies Co. and a Silicon Valley startup backed by Microsoft Corp. and Dell Technologies Inc.

CNEX Labs Inc., based in San Jose, Calif., and its co-founder Yiren “Ronnie” Huang alleged in Texas federal court this week that Huawei and its Futurewei unit have engaged in a multiyear plan to steal CNEX’s technology.

A lawyer for Huawei denied the allegations, which were made in a countersuit in response to a complaint Huawei itself had filed last year,  accusing CNEX and Mr. Huang—its former employee—of stealing its trade secrets and demanding detailed information about CNEX’s tech.

The ongoing legal dispute is an unusual example of a Chinese company attempting to use the U.S. court system to access technology it claims had been stolen from it by an American firm.

The intellectual property in dispute—solid-state drive (SSD) storage technology—allows massive data centers to manage the ever-growing volume of information generated by artificial intelligence and other advanced applications, prompting investment in CNEX from the venture-capital arms of Dell and Microsoft, which operate leading storage and cloud platforms, respectively.

Mr. Huang, a Chinese-born U.S. citizen, is at the center of both the American and Chinese companies’ allegations against each other, underscoring the increasingly intertwined nature of the talent pool for developing cutting-edge technologies.

After attending universities in Shanghai and Michigan, Mr. Huang worked in Silicon Valley for nearly 30 years — including nearly a dozen at Cisco Systems Inc. — CNEX says in court filings. He was named an inventor on nine U.S. patents and 13 pending U.S. patent applications that have been assigned to CNEX, the filings say.

In 2011 Futurewei, based in Plano, Texas, hired Mr. Huang to work at its Santa Clara, Calif., offices due to his expertise in SSD technology, but the Chinese firm refused his offer to sell to Futurewei his pre-existing intellectual property, CNEX alleges. Later the Chinese firm tried to get him to sign it away under an employment agreement, but he refused to do so, CNEX says.

After finding Futurewei lacking in entrepreneurial culture, Mr. Huang left in May 2013 and promptly co-founded CNEX, the U.S. firm says in its filings. Huawei immediately began monitoring the startup, including by feigning interest in becoming a customer in order to try to improperly gain access to its technology, CNEX says.

Huawei then sued CNEX and Mr. Huang, alleging that he had stolen its technology and recruited 14 of its employees to join him at his new firm and bring Huawei technology with them. CNEX admitted in its response that those individuals are now its employees but denied Huawei’s allegations that their hiring was part of a conspiracy.

CNEX said Huawei’s litigation against it is “premised on bogus claims of trade secret misappropriation and false claims of ownership of CNEX’s proprietary technology” and that it “represents the latest in a long line of underhanded tactics waged by Plaintiffs in their ongoing effort for Chinese technological dominance.”

As part of the discovery process, Huawei asked the court in a filing earlier this month to force CNEX to turn over all of its technical documents, including “detailed engineering specifications, testing plans, source code design documents, source code flow charts, hardware design documents and schematics, hardware and software bug status reports, engineering personnel responsibility designations, client product delivery details, and production schedules.”

Though it dominates telecom-equipment markets in Asia, Europe and elsewhere, Huawei has been virtually locked out of the U.S. since a 2012 congressional report alleged that its gear, and that made by its smaller Chinese rival ZTE Corp., could be used by Beijing to spy on Americans. The companies have denied the allegations.

Scrutiny of the two Chinese telecom behemoths has intensified in recent months as the Trump administration and a bipartisan coalition in Congress move deliberately to counter what they view as years of unbridled Chinese aggression across an array of military, political and economic fronts.

The U.S. Chamber of Commerce has long criticized China’s theft of intellectual property from American businesses, including with a scathing report on Beijing’s Made in China 2025 policy, a blueprint for turning China into a global manufacturing leader.

The Committee on Foreign Investment in the U.S., which reviews deals for national security concerns, cited Huawei’s dominance in the telecommunications-equipment industry in advising President Trump to block Broadcom Ltd.’s $117 billion hostile takeover bid for U.S. semiconductor giant Qualcomm in March. The Treasury-led committee, known as CFIUS, wrote that it feared such a deal could weaken Qualcomm, which competes with Huawei for wireless-technology patents.

The U.S. is using the committee, as well as an ongoing update of export-control laws, to protect critical technology such as semiconductors from being acquired by China.

Earlier this year, the Pentagon halted sales on U.S. military bases of smartphones made by Huawei and ZTE. The Justice Department is also investigating whether Huawei violated U.S. sanctions on Iran.

Write to Kate O’Keeffe at kathryn.okeeffe@wsj.com




Stock Sector · October 18, 2018 · 9 min read


Online ordering technology has completely transformed the restaurant industry. However, with its widespread use, problems have developed and these problems can financially break a business.

The situation reminds me of a conversation that a business owner once had with a banker to get a loan:

Owner: “We make the best widgets and we sell them at a competitive price.”

Banker: “So, how much does one of your widgets cost to buy?”

Owner: “Only two dollars!”

Banker: “How much does it cost to make one of your widgets?”

Owner: “Each widget only costs three dollars to make.”

Banker: “So, you’re losing a buck for every widget you sell! How do you plan to make money?”

Owner: “Volume!”

Unfortunately, this mindset mirrors that of many restaurant owners who use large third-party portals to process their online takeout and delivery orders, where it’s possible to lose more money with each order.

The quest to get more customers can be obsessive for restaurant owners. Meanwhile, the increasing demand for convenience is pushing customers to order more and more online, for both takeout and delivery. Research by Morgan Stanley projects that “40% of total restaurant sales — or $220 billion — could be up for grabs by 2020, compared with current sales of around $30 billion.”

Restaurant owners are, therefore, compelled to join these portals in fear their competitors will steal their customers and the revenues they represent. The catch: These portals generally charge restaurants anywhere between 15% and 30% per order.

Measuring The Current Model

Why would restaurants pay such huge fees? Well, the sales pitch from portals is often that “it’s incremental business” and restaurants can afford high fees on orders they wouldn’t have otherwise received.

In theory, any incremental business will help a restaurant’s bottom line. But what if it’s not really incremental? What happens when a customer who already knows about a restaurant, and has previously ordered there, orders again using a third-party portal? The customer is still technically ordering from the restaurant, but the restaurant has effectively lost that customer to the portal — and now pays a fee on an order it would have received anyway.

Restaurants are cannibalizing their own business. It’s simple: The higher the volume a restaurant does through a portal, the lower its overall profit margin. If 25% of a restaurant’s total business is done through an online portal that charges 25%, the fees strip 6.25% of its total revenue. That can break a restaurant.
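That back-of-the-envelope calculation generalizes to any fee structure. A minimal sketch (the helper function is mine; only the 25%/25% figures come from the example above):

```python
# Fraction of a restaurant's TOTAL revenue that goes to portal fees.
# Hypothetical helper, written only to make the arithmetic explicit.

def revenue_lost_to_portal(portal_share, portal_fee_rate):
    """portal_share: fraction of all orders placed through the portal.
    portal_fee_rate: commission the portal charges on those orders."""
    return portal_share * portal_fee_rate

# The example above: 25% of orders through a portal charging 25%.
print(f"{revenue_lost_to_portal(0.25, 0.25):.2%}")  # 6.25%
```

Push the portal share to half of all orders at a 30% commission and the fees consume 15% of total revenue, which for a typical restaurant can exceed the entire profit margin.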

Unfortunately, the cost of attracting these customers isn’t always factored into the decision-making process. For many restaurants, the fees portals charge are unsustainable. Period.

So, why don’t restaurants just charge higher prices on a portal? Because raising prices on a portal is virtually impossible. In defense of the third-party sites, that actually makes sense: How long would customers continue ordering from a portal if they knew they were paying substantially higher prices (to make up for the high service fees)? Not long, and poof — the portal’s business would evaporate.

These financial challenges cause some restaurant owners to have a “love-hate” relationship with these big online portals. Restaurants love the “additional” customers, but they hate the high fees. Other restaurant owners simply have a “hate-hate” relationship with these sites. They hate the feeling of being compelled to join and hate the high fees.

I’ve personally spoken with many restaurant owners about this. In just about every single case, the topic generates some very not-suitable-for-work language. Since my company, NetWaiter, provides a platform on which to integrate ordering systems (primarily non-portals) into a network of restaurant sites, we have had a front-row seat to the frustrations portals have caused restaurant owners.

Things aren’t all that bad though. For instance, if a restaurant doesn’t currently offer delivery, a service that provides delivery could expand the restaurant’s reach and accommodate customers they wouldn’t normally serve. The per-order economics are better because a restaurant doesn’t have the costs associated with paying delivery drivers (which can be horrendously expensive).

But before you relax, there’s a little more bad news: It’s hard for delivery services to make money by only charging customers a small delivery fee. The only way they can make money on an order is by offsetting the delivery fee with the sizable service fees charged to restaurants.

From my perspective, to accommodate restaurants, portals may need to shift their fees. Easier said than done because the only other party involved in this transaction is the customer. Customers can only absorb so much. Paying a $15 delivery fee on a $15 order isn’t going to work.

The Solution May Involve All Parties

An ideal solution may be for everyone to budge a little. Portals (or delivery services) lower their service fees to restaurants, customers pay higher delivery fees and larger minimum order sizes are put in place so the per-order economics can work out better for the portal (and restaurant).
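To see why each lever matters, here is a toy per-order model of that compromise. Every figure is an assumption chosen for illustration, not data from any portal:

```python
# Toy per-order economics for a delivery portal. All numbers below are
# hypothetical assumptions, used only to illustrate the trade-off.

def portal_profit(order_total, delivery_fee, service_fee_rate, delivery_cost):
    """Portal's margin on one delivered order: the customer's delivery
    fee, plus the commission charged to the restaurant, minus the cost
    of actually delivering the order."""
    return delivery_fee + service_fee_rate * order_total - delivery_cost

# Status quo: small order, low delivery fee, high (25%) commission.
today = portal_profit(order_total=15, delivery_fee=3,
                      service_fee_rate=0.25, delivery_cost=7)

# Everyone budges: larger minimum order, higher delivery fee, 15% commission.
proposed = portal_profit(order_total=30, delivery_fee=5,
                         service_fee_rate=0.15, delivery_cost=7)

print(today)     # -0.25: the portal loses money on the small order
print(proposed)  # 2.5: the compromise turns the order profitable
```

Under these made-up numbers, the portal actually earns more per order while charging the restaurant a lower commission rate, which is the whole point of pairing bigger minimum orders with higher delivery fees.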

Competitive forces may keep that type of shift from happening, though, because portals spend a lot of money to entice customers to use their service over any other. Charging customers higher delivery fees isn’t an attractive offer. Alternatively, some portals are offering lower rates to restaurants for online exclusivity. Essentially, if a restaurant agrees to only use one service, that service will lower the restaurant’s service fees.

Online ordering provides restaurants with huge benefits, and portals can have a place at the table, but some changes clearly need to be made for everyone to be happy. The online ordering market is only getting bigger. As such, this problem can’t be ignored.

There is a solution out there, and once it’s found, it will likely be valuable for all parties. In the meantime, no business owner should implement technology if it could break their business.




Stock Sector · October 18, 2018 · 31 min read


Uncomplicated Technology, and Why It’s Always Worth Your Money

If you believe the marketing, you’d think every new gadget will change your life — but many are confusing to use or doomed to obsolescence. Here’s how to determine whether your purchase will stand the test of time.

The WobbleWorks 3Doodler Create+ is so simple to use, you may not even consider it a gadget, much less a 3D printer, making it a prime example of “uncomplicated” technology. Credit: Daniel Cowen/WobbleWorks Inc.

By Terry Sullivan

  • Oct. 17, 2018

My daughter Liz loves gadgets. She even recently reviewed one: the WobbleWorks 3Doodler Create+, a device that’s part pen, part 3-D printer. (Here she is giving a demo of it.) What struck me most is that because it’s designed as a toy, it slips past our notice as a “gadget.” Yet it most certainly is one, letting you create objects in the same manner (and using many of the same materials) as a regular, complicated 3-D printer.

As my daughter said, it’s “3-D printing in real time.” The pen is fun to use and a great example of “uncomplicated” tech. There’s no byzantine software to learn. In fact, you don’t need a computer to use it, and you can get up and running quickly.

In many ways, it’s a model of how all consumer technology should be, and of what separates the tech that stands the test of time from the gear that’s forgotten in a year or two: It’s uncomplicated.

It reminded me of two other products: First, Pure Digital’s point-and-shoot video camcorder, the Flip pocket cam, which was reviewed in The Times in 2008. I nearly refused to review it then because, as I told my editor, it was too simplistic and couldn’t compete with full-size camcorders. But when I did test it, I was impressed, at least with how easy it was to use. I wrote back then, “Just press the red button to start and stop recording. Delete what you don’t want. Play back what you want to review. What could be simpler?” I remember a good friend looking at the back of the Flip and saying, “Oh, just press the big red button to record, right?” That’s intuitive design.

The second product is Apple’s GarageBand, a music creation mobile app for the iPad and iPhone. Before it came out, I’d been working hard to learn full-featured but poorly designed digital audio workstation software programs on my PC, and I was incredibly frustrated. I’d spend months to master a technique, only to lose heart and give up. When Apple introduced GarageBand, I produced a full song, start to finish, in hours instead of weeks or months. In particular I admired how, in the iPad version, when you tap a question mark icon, precise text descriptions appear all over the screen, annotating key features. For an amateur musician like me, tips and help guides like this are essential to the user experience.

But oftentimes consumer technology is just the opposite: It’s too complicated. I asked Jeffrey Zeldman, a web designer, author and entrepreneur, why this is the case. In part, he explained, it’s because tech has a legacy of being complicated.

“When I started in web design, computers were for nerds, and people took pride in how difficult everything was,” Mr. Zeldman said. Those in technology in the late 1980s and early 1990s were familiar with the following scenario: An engineer makes a product, then adds features that a manager said customers wanted. It worked, but it was almost impossible to figure out intuitively. In other words, “Nobody was really thinking about what the consumer needs,” Mr. Zeldman said.

But Mr. Zeldman said that in the past 20 years, there has been a shift. The products that have been most successful, particularly digital products and services, “haven’t been the most advanced, sophisticated or beautiful, necessarily,” he explained, “but they were the ones that understood what consumers wanted to do and enabled them to do it.”

This is what made Amazon, Google and Apple so powerful. The same notion can explain why products fail. Take Microsoft’s misstep in redesigning the Windows 8 operating system: The move was almost universally panned for removing the hallmark “Start” button users had come to rely on. The new “Metro” tiled interface was certainly sophisticated and attractive, reflected how the market was changing, and incorporated mobile-friendly design elements. But it misunderstood what consumers wanted — namely, a consistent starting point.

The key to determining whether some new piece of technology, whether it’s a gadget, app or website, will work for you — and better yet, stand the test of time — is how uncomplicated it is, and how easy it is to do what the product is designed to do. Here are a few factors to help you evaluate before you sign up, or spend your money.

Before you buy something, it should be pretty obvious what it does and, generally, how it works. “When a consumer is frustrated on a website,” Mr. Zeldman said, “that means a designer didn’t do their job.” Which means designers don’t always do a good job. It’s not your fault. Product designs should make your experience simple and clear. This can be expanded to any tech product, from desktop computers and TVs to wearable fitness trackers and apps. If you’re looking to buy a wireless speaker but the controls aren’t clearly labeled and it’s complicated to pair with your mobile device, then consider another — there are many to choose from.

Inkjet printers have some impressive features, but the ink cartridges they require infuriate me because it’s as if they’ve been intentionally designed to confuse you. Over the years, most printers I’ve owned have indicated that a cartridge needs replacing even though I can still print many pages of text; the settings offer no truly accurate measurement of how much ink is left. That means if I throw a cartridge out too early, I’m essentially throwing money away. That’s just poor, confusing product design — or, even worse, purposely vague design meant to get you to needlessly spend money.

For most things, you may only need a very simple product or app. Those who may want more sophistication will have to spend time finding a full-featured alternative. “If there’s a learning curve,” Mr. Zeldman said, “does it teach you a new way to think about that subject and make you better at what you do?” If so, that product might be worth it. If you buy a high-end camera, you can learn how to shoot a vast array of creative photos. But if you’re just shooting simple selfies, you might be wasting your money.

“If I’m downloading an app from Apple’s app store, I read the reviews first. I study the screenshots there, since they’re representative of the app,” Mr. Zeldman said. “Maybe I’m interested in downloading a photo app, but if the filters are ugly in the screenshot, I know it’s not for me.” It’s also O.K. to stop using an inexpensive app after you’ve downloaded it. “One of the great things about apps is that you can also try out a limited-feature version (only certain elements of the app are turned on), and see if it works for you before you decide to pay for the full product,” Mr. Zeldman said. It’s difficult, but not impossible, to try out hardware like a camera or speaker. See if you can borrow a friend’s device to try it out. Or try renting one. With a large item, like a TV, visit a friend who has bought the same (or similar) product and see how it works.

As a teacher, I find this to be a valuable gauge of how much the company really cares. A stand-alone scanner I once bought had a manual that stopped midway through the instructions — not surprisingly, I rarely used that scanner. Conversely, a digital camera I once reviewed had a number of simple, well-illustrated tip sections built right into the camera menu, which were invaluable to a novice and even helpful for experienced shooters.

Obviously, this is easier if you’re not out a lot of money. “If an app isn’t helping me,” Mr. Zeldman said, “I just wipe it off my phone and don’t give it another thought.” The reviews might be good and it might be well regarded, but don’t bother with it if it doesn’t suit your needs. As for pricey gadgets, you’ll want to do research beforehand. However, the same philosophy applies: After you’ve done your research, if the product in question doesn’t suit your needs or budget, walk away. Return it or, better yet, sell it.

If you’re looking for tech that’s uncomplicated, you can’t really make an assessment if you don’t know what’s available or what’s changed. If you were looking for a high-end camera 10 years ago, the best viewfinders were the through-the-lens viewfinders found on digital single-lens reflex cameras; electronic viewfinders at the time were mediocre, grainy and inferior. Today’s electronic viewfinders, though, are just about as clear and sharp.

Terry Sullivan is a journalist who covers consumer electronics, technology services and their intersection with the visual arts. Follow him on Twitter and Instagram.




Stock Sector · October 17, 2018 · 14 min read


In 2007, The New York Times published an op-ed titled “This is Your Brain on Politics.” The authors imaged the brains of swing voters and, using that information, interpreted what the voters were feeling about presidential candidates Hillary Clinton and Barack Obama.

“As I read this piece,” writes Russell Poldrack, “my blood began to boil.” Poldrack is a neuroscientist at Stanford University and the author of The New Mind Readers: What Neuroimaging Can and Cannot Reveal about Our Thoughts (out now from Princeton University Press). His research focuses on what we can learn from brain-imaging techniques such as fMRI, which measures blood flow in the brain as a proxy for neural activity. And one of the clearest conclusions, he writes, is that activity in a particular brain region doesn’t by itself tell us what the person is experiencing.

The Verge spoke to Poldrack about the limits and possibilities of fMRI, the fallacies that people commit in interpreting its results, and the implications of its widespread use. This interview has been lightly edited for clarity.

When did “neuroimaging” start to be everywhere?


My guess is around 2007. There were results coming out around 2000 and 2001 that started to show that we can probably start to decode the contents of somebody’s mind from imaging. These were mostly focused on what the person was seeing, and that doesn’t seem shocking, I think. We know a lot about the visual system but it doesn’t seem uniquely human or conscious.

In 2007, there were a number of papers that showed that you can decode people’s intentions, like whether they were going to add or subtract numbers in the next few seconds, and that seemed like really conscious cognitive stuff. Maybe that was when brain reading really broke into awareness.

A lot of your book is about the limits of fMRI and neuroimaging, but what can it tell us?

It’s the best way we have of looking at human brains in action. It’s limited and it’s an indirect measure of neurons because you’re measuring blood flow instead of the neurons themselves. But if you want to study human brains, that works better than anything else in terms of pinpointing anything.

What are some of the technical challenges around fMRI?

The data are very complex and require a lot of processing to go from an MRI scanner to the things you see published in a scientific paper. And there are things like the fact that every human brain is slightly different, and we have to align them all to a common template to get them to match. The statistical analysis is very complex, and there have been a set of controversies in the fMRI world about how statistics are being used, interpreted and misinterpreted. We’re doing so many tests, we have to make sure we’re not fooling ourselves with statistical flukes. The false-positive rate we try to enforce is 5 percent.
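The multiple-testing problem Poldrack describes is easy to demonstrate: run thousands of independent tests on pure noise and a 5 percent threshold guarantees hundreds of spurious "hits." The simulation below is a minimal sketch on synthetic data, using Bonferroni as one illustrative correction (actual fMRI pipelines use more sophisticated methods such as cluster-level or false-discovery-rate control):

```python
import random

random.seed(0)

# Simulate 10,000 "voxels" with NO real effect: under the null
# hypothesis, each test's p-value is uniform on [0, 1].
n_voxels = 10_000
alpha = 0.05
p_values = [random.random() for _ in range(n_voxels)]

# Uncorrected: roughly 5% of null voxels pass anyway -- pure flukes.
uncorrected_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: divide alpha by the number of tests, so the
# chance of even one family-wise false positive stays near 5%.
corrected_hits = sum(p < alpha / n_voxels for p in p_values)

print(f"uncorrected false positives: {uncorrected_hits} of {n_voxels}")
print(f"Bonferroni false positives:  {corrected_hits} of {n_voxels}")
```

With no real signal anywhere, the uncorrected analysis still flags hundreds of voxels; this is exactly the "statistical fluke" trap he refers to.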

What about generalizability? How well can you generalize from one person’s results to say, “this happens in all humans”?

It depends on the nature of what you’re trying to generalize. There are large-scale things that we can make generalizations about. Pretty much every healthy adult human has visual processing going on in the back of the brain, stuff like that. But there’s a lot of fine-grained detail about each brain that gets lost. You can generalize coarse-grained things, but the minute you want to dig into finer-grained, you have to look at each individual more closely.

In the book, you talk a lot about the fallacy of “reverse inference.” What is that?


Reverse inference is the idea that presence of activity in some brain area tells you what the person is experiencing psychologically. For example, there’s a brain region called the ventral striatum. If you receive any kind of reward, like money or food or drugs, there will be greater activity in that part of the brain.

The question is, if we take somebody and we don’t know what they’re doing, but we see activity in that part of the brain, how strongly should we decide that the person must be experiencing reward? If reward was the only thing that caused that sort of activity, we could be pretty sure. But there’s not really any part of the brain that has that kind of one-to-one relationship with a particular psychological state. So you can’t infer from activity in a particular area what someone is actually experiencing.

You can’t say “we saw a blob of activity in the insula, so the person must be experiencing love.”
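The weakness of reverse inference can be made precise with Bayes' rule: what matters is not just how often reward activates the region, but how often everything else does too, weighted by how common reward actually is. The numbers below are hypothetical, chosen only to illustrate the logic:

```python
# Hypothetical probabilities, for illustration only.
p_active_given_reward = 0.80   # region activates when reward is present
p_active_given_other = 0.30    # ...but also during many other states
p_reward = 0.10                # prior: reward is 1 state in 10

# Bayes' rule: P(reward | activation)
p_active = (p_active_given_reward * p_reward
            + p_active_given_other * (1 - p_reward))
p_reward_given_active = p_active_given_reward * p_reward / p_active

print(f"P(reward | activation) = {p_reward_given_active:.2f}")
```

Even with a region that responds strongly to reward, seeing it activate raises the probability of reward only to about 23 percent here, because so many other states also drive it. That is the one-to-one relationship Poldrack says no brain region actually has.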

What would be the correct interpretation then?

The correct interpretation would be something like, “we did X and it’s one of the things that causes activity in the insula.”

But there are also tools from statistics and machine learning that let one quantify how well you can predict one thing from another. Using statistical analysis, you can say, “we can infer with 64 percent accuracy whether this person is experiencing X based on activity across the brain.”
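The quantified decoding Poldrack describes is typically done with a cross-validated classifier trained on patterns of activity across many voxels. Below is a minimal sketch on entirely synthetic data, using a toy nearest-centroid decoder with leave-one-out cross-validation (not the method of any particular study):

```python
import random

random.seed(1)

# Synthetic "voxel patterns": two mental states, each a noisy copy of
# its own 20-voxel template. Made-up data, for illustration only.
n_voxels, n_trials_per_state = 20, 20
templates = {
    "state_A": [random.gauss(0, 1) for _ in range(n_voxels)],
    "state_B": [random.gauss(0, 1) for _ in range(n_voxels)],
}

def noisy(template, noise=1.5):
    return [v + random.gauss(0, noise) for v in template]

trials = [(label, noisy(t)) for label, t in templates.items()
          for _ in range(n_trials_per_state)]
random.shuffle(trials)

def centroid(patterns):
    return [sum(col) / len(col) for col in zip(*patterns)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Leave-one-out cross-validation: train centroids on all trials but
# one, then predict the held-out trial from its nearest centroid.
correct = 0
for i, (true_label, pattern) in enumerate(trials):
    train = trials[:i] + trials[i + 1:]
    cents = {lab: centroid([p for l, p in train if l == lab])
             for lab in templates}
    pred = min(cents, key=lambda lab: dist(pattern, cents[lab]))
    correct += pred == true_label

accuracy = correct / len(trials)
print(f"decoding accuracy: {accuracy:.0%}")
```

Cross-validation is what licenses statements like "64 percent accuracy": the decoder is always scored on trials it never saw during training, so the number estimates how well the inference generalizes rather than how well it memorizes.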

Is reverse inference the most common fallacy when it comes to interpreting neuroscience results?

It’s by far the most common. I also think sometimes people can misinterpret what the activity means. We see pictures where it’s like, there’s one spot on the brain showing activity, but that doesn’t mean the rest of the brain is doing nothing.

You write about “neuromarketing,” or using neuroscience techniques to see if we can see the effect of marketing. What are some of the limits here?

It hasn’t been fully tested yet. Whenever you have science mixed with people trying to sell something — in this case, the people are trying to sell the technique of neuromarketing — that’s ripe for overselling. There’s not much widespread evidence really showing that it works. Recently there have been some studies suggesting you can use neuroimaging to improve the ability to figure out how effective an ad is going to be. But we don’t know how powerful it is yet.

Our ability to decode from brain imaging is so limited and the data are so noisy. Rarely can we decode with perfect accuracy. I can decode if you’re seeing a cat or a house with pretty much perfect accuracy, but anything interestingly cognitive, we can’t decode. But for companies, even if there’s just a 1 percent improvement in response to the ad, that could mean a lot of money, so a technique doesn’t have to be perfect to be useful for some kind of advantage. We don’t know how big the advantage will be.

One interesting point you make is that there are some issues with the increasingly common statement that addiction is a brain disease. What’s the issue here?

Addiction causes people to experience bad outcomes in life and so to that degree it’s like other diseases, right? It results directly from things going on in one’s brain. But I think calling it a “brain disease” makes it seem like it’s not a natural thing that brains should do.

Schizophrenia is a brain disease in the sense that most people behave very differently from someone with schizophrenia, whereas addiction I like to think of as a mismatch between the world we evolved in and the world we live in now. Lots of diseases, like obesity and type II diabetes, probably also have a lot of the same flavor.

We evolved this dopamine system meant to tell us to do more of things we like and less of things we don’t like. But then if you take stimulant drugs like cocaine, they operate directly on the dopamine system. They’re this evolutionarily unprecedented stimulus to that system that drives the development of new habits. So it’s really the brain doing the thing it was evolved to do, in an environment that it’s not prepared for.

Going back to reverse inference for a second, how long do you think it’ll be before we actually are able to decode psychological states?

It depends on what you’re trying to infer. Certain things are easier. If you are talking about the overall ability to make reverse inferences on any kind of mental state, I’m not sure that we’re going to be able to do that with the current brain imaging tools. There are just fundamental limits on fMRI in terms of its ability to see brain activity at the level that we might need to see it. It’s an open question, and we’re certainly learning a lot about what you can predict; part of that is going to be the development of better statistical models. Ultimately, fMRI is a limited window into the biology, and without a better window into human brain function, it’s not clear to me that we will be able to get to perfect reverse inference with this tool.


Stock Sector, October 17, 2018


Blockchain technology forms the foundation for cryptocurrencies such as Bitcoin, Dogecoin, and Ethereum, but it can be difficult to understand how it actually works. The Onion answers common questions about blockchain technology.


Q: How does blockchain work?

A: Do you want to talk science shit or do you want to make some fucking money?


Q: Who uses blockchain?

A: Ordinary folks in charge of million-dollar cryptocurrency accounts and diamond supply chains.



Q: How much is a blockchain?

A: It’s only $250 per block of chain. A steal, if you ask us.


Q: What is the benefit of using blockchain?

A: Provides a more efficient way for you to lose all your money at once.


Q: Is the system fully secure from hackers?

A: Nothing that bad has happened yet, so we’re just going to say yes.


Q: How do banks feel about blockchain technology?

A: As long as banks still find a way to exploit the poor, they couldn’t care less.



Q: Is there really child pornography encoded into Bitcoin’s blockchain?

A: Only a little bit!


Q: Would widespread use further entrench us in our dependence on technology without which we would be plunged into a horrifying new dark age?


A: Yes.


Stock Sector, October 17, 2018


There is global interest in what a shared distributed ledger means for the accounting industry. For proof, one need look no further than the American Accounting Association — the premier organization of accounting academics, with more than 7,000 members worldwide.

Dean Mark Rubin, Farmer School of Business, Miami University

For more than 100 years, the AAA’s members have been responsible for training the next generation of accountants, and now they are making sure that their members are knowledgeable about blockchain technology. I spoke recently with Mark Rubin, dean of the Farmer School of Business at Miami University in Ohio and the current AAA president, who told me that he sees a shift in the way accounting professors are teaching their students. Instead of focusing primarily on content, he said, they are emphasizing skills that will last their accounting students a lifetime, and that includes blockchain technology.

“Our students will make data-driven decisions to drive new business models,” said Rubin. The AAA is committed to helping serve a more prosperous global society, Dean Rubin explained. Accountants do this by ensuring that decision-makers, both internal and external to the organization, have the best financial (and other) data to make decisions about the allocation of resources.

As global finance is re-imagined on a shared, distributed ledger, the work being done by the AAA is part of a bigger-picture movement that will impact the way businesses make investments and allocate resources. Accountants devise the tools and rules with which to capture the economics of the organization. This is where blockchain technology comes in. Blockchain technology has the potential to provide decision-makers with cleaner and more complete data in near real time, which could lead to improved resource allocation, better investments, etc. Because the information is recorded to the blockchain (which is immutable), it is almost self-audited.
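The immutability the article leans on comes from hash-chaining: each block commits to the hash of its predecessor, so editing any past entry invalidates every later link. The sketch below is a toy ledger illustrating that property (journal-entry fields and function names are invented for illustration; this is not any production blockchain):

```python
import hashlib
import json

def block_hash(contents):
    # Hash the block's contents, including the previous block's hash,
    # so an edit anywhere breaks every later link in the chain.
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, entry):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"entry": entry, "prev": prev}
    block["hash"] = block_hash({"entry": entry, "prev": prev})
    chain.append(block)

def verify(chain):
    # Recompute every hash and check each link to its predecessor.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block["hash"] != block_hash({"entry": block["entry"],
                                        "prev": block["prev"]}):
            return False
        prev = block["hash"]
    return True

ledger = []
append(ledger, {"debit": "cash", "credit": "revenue", "amount": 100})
append(ledger, {"debit": "inventory", "credit": "cash", "amount": 40})
print(verify(ledger))              # True: chain is intact

ledger[0]["entry"]["amount"] = 999  # tamper with an old entry
print(verify(ledger))              # False: the stored hash no longer matches
```

This tamper-evidence is what underlies the "almost self-audited" claim: an auditor can recompute the chain and detect any retroactive change, though it says nothing about whether the entries were correct when first recorded.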

Traditionally, accountants have focused primarily on an organization’s journal entries and ledgers, and how the organization’s information flows through those ledgers. They consider the controls that are needed to assure the integrity of information. Shared ledgers on a blockchain will lead to new lines of inquiry in the organization and in the classroom.

In a blockchain-enabled world, accountants will re-examine tried-and-tested accounting processes. They will rethink the way audits are done, reconsider how to attest to data recorded to a blockchain, and re-evaluate how to verify the integrity of financial (and other) data that decision makers rely on. Accountants will broaden their scope as to what was traditionally considered accounting information and also what their role as accountants should be writ large.

Rubin conceded that these are early days for blockchain technology and, as such, it is premature to discuss specific blockchains. Nevertheless, the advent of blockchain technology raises myriad questions for the accounting profession which need to be explored in the classroom and by the academy.

To begin addressing these issues, the AAA hosted an Emerging Issues Forum on blockchain technology in September. Professors came to the conference with some knowledge about blockchain technology, and the firm belief that it will disrupt their industry. The academics were eager to learn key concepts about blockchain technology and how best to incorporate distributed ledger technology into their course curriculum. Some had already begun to teach it.

To leverage the momentum generated by the Forum, the AAA has planned a four-day intensive workshop for summer 2019 that includes a track on blockchain technology. Providing professors with a space to learn about blockchain technology will lead to peer-reviewed research on the intersection of blockchain technology and accounting processes, such as audits of the distributed ledger. Dean Rubin suggested that behavioral research into the way people view and use information that is derived from a blockchain would have value, especially understanding whether information recorded to a blockchain is perceived to be more or less credible than data that is derived from traditional accounting methods.

The AAA is not alone in its focus on blockchain technology. Earlier this year, the Accounting Blockchain Coalition (ABC) was launched to educate organizations on accounting issues related to digital assets and distributed ledger technology. In addition, the Association of International Certified Professional Accountants (AICPA), with more than 600,000 members worldwide, offers a certificate program in blockchain fundamentals for accounting and finance professionals.

This early work on blockchain technology has already begun to spur controversy in the accounting academy, with two distinct camps emerging. In one camp, proponents submit that blockchain technology enables the enterprise to capture rich transactional data (in a distributed ledger) which is used to summarize debits and credits. In the second camp, proponents believe that financial statements are created by capturing transaction details rather than summarizing financial data into journal entries. In my next article, I will delve into these camps to provide a fuller explanation of the two schools of thought and what they mean to the accounting profession and the rest of us.

Dean Rubin can’t predict how accountants will use blockchain technology in the future, but he asserted, “we need to keep our minds open and continually consider how these technologies will impact the accounting profession and the world at large.”


Stock Sector, October 17, 2018


By Claire Rychlewski

As technology becomes more integral to healthcare and valuations continue to climb, the margin for error is razor-thin when it comes to assessing the health of a target company’s technology, dealmakers say.

If issues are not vetted in advance of a sale process, it can result in “very significant hits to valuations, and buyers walking away,” says Amherst Partners Managing Director John Patterson, who was speaking publicly at iiBIG’s Investment and M&A Opportunities in Healthcare conference in Chicago last week.

Patterson, whose firm specializes in middle-market advisory, and other panelists say that while technology has resulted in a flood of healthcare IT assets hitting the market, it has also made sale processes that much more complex.

“The need for external help to assess technology has increased,” says Edward Francis, senior director in consulting firm West Monroe Partners’ Healthcare & Life Sciences practice. Buyers are working hard to determine whether a target’s technology offering is sustainable, and whether the company will have to invest in remediating regulatory compliance issues, he adds.

As the reimbursement and regulatory environment has become more onerous, so have compliance, privacy and cybersecurity issues, they say.

Patterson says dealmakers are responding by turning to subject matter experts more often, and notes that quality-of-earnings work has become more widespread even in lower-middle market processes.

Chad Neale, managing director at cybersecurity and technology risk assessment firm ACA Aponix, says buyers must focus on addressing federal HIPAA (Health Insurance Portability and Accountability Act) compliance issues in the first 90 days, noting that the presence of privacy and security experts, and a sound vendor management program are key.

As an example of the risks, Michael Sullivan, a managing director at consulting firm Berkeley Research Group, cites electronic health records (EHR) vendor eClinicalWorks, which paid $155 million last year to settle US Department of Justice charges that it falsely obtained certification for its EHR software. The DOJ said the software had an incomplete database of drug codes and drug interaction checks, and could not accurately track lab results, among other flaws.

That settlement depressed competing EHR vendor Practice Fusion’s valuation, according to media reports. Practice Fusion, which sold to Allscripts in January for $100 million, reportedly received suitor offers for as much as $250 million in 2017 – offers that were pulled shortly after news broke about its competitor’s noncompliance.

“There’s more pressure on providers to capture information in a more efficient way,” Sullivan says. “It’s a plus for technology companies, but providers who take on risk are exposed to more regulatory scrutiny.”

Claire Rychlewski is a healthcare reporter in Chicago for Mergermarket.
