Tuesday, September 19, 2017

The Endgame is a US with the Military Effectively in Charge


tomdispatch |  By Michael T. Klare, professor of peace and world security studies at Hampshire College and the author of 14 books including, most recently, The Race for What’s Left. He is currently completing work on a new book, All Hell Breaking Loose, on climate change and American national security. Originally published at TomDispatch

Deployed to the Houston area to assist in Hurricane Harvey relief efforts, U.S. military forces hadn’t even completed their assignments when they were hurriedly dispatched to Florida, Puerto Rico, and the U.S. Virgin Islands to face Irma, the fiercest hurricane ever recorded in the Atlantic Ocean. Florida Governor Rick Scott, who had sent members of the state National Guard to devastated Houston, anxiously recalled them while putting in place emergency measures for his own state. A small flotilla of naval vessels, originally sent to waters off Texas, was similarly redirected to the Caribbean, while specialized combat units drawn from as far afield as Colorado, Illinois, and Rhode Island were rushed to Puerto Rico and the Virgin Islands. Meanwhile, members of the California National Guard were being mobilized to fight wildfires raging across that state (as across much of the West) during its hottest summer on record.

Think of this as the new face of homeland security: containing the damage to America’s seacoasts, forests, and other vulnerable areas caused by extreme weather events made all the more frequent and destructive thanks to climate change. This is a “war” that won’t have a name — not yet, not in the Trump era, but it will be no less real for that. “The firepower of the federal government” was being trained on Harvey, as William Brock Long, administrator of the Federal Emergency Management Agency (FEMA), put it in a blunt expression of this warlike approach. But don’t expect any of the military officials involved in such efforts to identify climate change as the source of their new strategic orientation, not while Commander in Chief Donald Trump sits in the Oval Office refusing to acknowledge the reality of global warming or its role in heightening the intensity of major storms; not while he continues to stock his administration, top to bottom, with climate-change deniers.

Until Trump moved into the White House, however, senior military officers in the Pentagon were speaking openly of the threats posed to American security by climate change and how that phenomenon might alter the very nature of their work.  Though mum’s the word today, since the early years of this century military officials have regularly focused on and discussed such matters, issuing striking warnings about an impending increase in extreme weather events — hurricanes, incessant rainfalls, protracted heat waves, and droughts — and ways in which that would mean an ever-expanding domestic role for the military in both disaster response and planning for an extreme future.

That future, of course, is now.  Like other well-informed people, senior military officials are perfectly aware that it’s difficult to attribute any given storm, Harvey and Irma included, to human-caused climate change with 100% confidence. But they also know that hurricanes draw their fierce energy from the heat of tropical waters, and that global warming is raising the temperatures of those waters. It’s making storms like Harvey and Irma, when they do occur, ever more powerful and destructive.  “As greenhouse gas emissions increase, sea levels are rising, average global temperatures increasing, and severe weather patterns are accelerating,” the Department of Defense (DoD) bluntly explained in the Quadrennial Defense Review, a 2014 synopsis of defense policy. This, it added, “may increase the frequency, scale, and complexity of future missions, including defense support to civil authorities” — just the sort of crisis we’ve been witnessing over these last weeks.

As this statement suggests, any increase in climate-related extreme events striking U.S. territory will inevitably lead to a commensurate rise in American military support for civilian agencies, diverting key assets — troops and equipment — from elsewhere. While the Pentagon can certainly devote substantial capabilities to a small number of short-term emergencies, the multiplication and prolongation of such events, now clearly beginning to occur, will require a substantial commitment of forces, which, in time, will mean a major reorientation of U.S. security policy for the climate change era.  This may not be something the White House is prepared to do today, but it may soon find itself with little choice, especially since it seems so intent on crippling all civilian governmental efforts related to climate change.

The Fruits of 20th Century American War Socialism


thesaker |  We are hard-coded to be credulous and uncritically accept all the demonization of Nazis and Soviets because we are Jews and White Russians. Careful here, I am NOT saying that the Nazis and Soviets were not evil – they definitely were – but what I am saying is that we, Jews and Russians, are far more willing to accept and endorse any version of history which makes the Nazis and Soviets some kind of exceptionally evil people and that, in contrast, we almost instinctively reject any notion that “our” side (in this case I mean *your* side, the American one since you, unlike me, consider yourselves American) was just as bad (if only because your side never murdered Jews and Russians). So let’s look at this “our/your side” for a few minutes.

By the time the USA entered WWII it had already committed the worst crime in human history, the poly-genocide of an entire continent, followed by the completely illegal and brutal annexation of the lands stolen from the Native Americans. Truly, Hitler would have been proud. But that is hardly all: the Anglo invaders then proceeded to wage another illegal and brutal war of annexation against Mexico, from which they stole a huge chunk of land that includes modern Texas, California, Nevada, Utah, Arizona and New Mexico! Yes, all this land was illegally occupied and stolen by your side not once, but TWICE! And do I even need to mention the horrors of slavery to add to the “moral tally” of your side by the time the US entered the war? Right there I think there is more than enough evidence that your side was morally worse than either the Nazis or the Soviets. The entire history of the USA is one of endless violence, plunder, hypocrisy, exploitation, imperialism, oppression and wars. Endless wars of aggression. None of them defensive by any stretch of the imagination. That is quite unique in human history. Can you think of a nastier, more bloodthirsty regime? I can’t.

Should I even mention the British “atrocities tally”, ranging from opium wars, to the invention of concentration camps, to the creation of Apartheid, the horrors of the occupation of Ireland, etc. etc. etc.?

I can just hear you say that yes, this was horrible, but that does not change the fact that in WWII the USA “saved Europe”. But is that really so?

To substantiate my position, I have put together a separate PDF file which lists 5 sources, 3 in English, 2 in Russian. You can download it here:

I have translated the key excerpts of the Russian sources and I am presenting them along with the key excerpts of the English sources. Please take a look at this PDF and, if you can, please read the full original articles I quote. I have stressed in bold red the key conclusions of these sources. You will notice that there are some variations in the figures, but the conclusions are, I think, undeniable. The historical record shows that:
  1. The Soviet Union can be credited with the destruction of roughly 80% of the Nazi military machine. The US-UK correspondingly can be credited with no more than 20% of the Allied war effort.
  2. The scale and scope of the battles on the Eastern Front completely dwarf the biggest battles on the Western Front. Battles in the West involved Divisions and Brigades, in the East they involved Armies and Groups of Armies. That is at least one order of magnitude of difference.
  3. The USA only opened a second front in Europe a year after the Stalingrad and Kursk battles, when it was absolutely clear that the Nazis would lose the war.
The truth is that the Americans only landed in force in Europe when it was clear that the Nazis would be defeated, and that their real motive was not the “liberation of oppressed Europe” but to prevent the Soviets from occupying all of Europe. The Americans never gave a damn about the mass murder of Jews or Russians; all they cared about was a massive land-grab (yet again).
[Sidebar: By the way, and lest you think that I claim that only Americans act this way, here is another set of interesting dates:
Nuclear bombing of Hiroshima and Nagasaki: August 6 and 9, 1945
Soviet Manchurian Strategic Offensive Operation: August 9–20, 1945
We can clearly see the same pattern here: the Soviets waited until it was absolutely certain that the USA had defeated the Japanese empire before striking it themselves. It is also worth noting that it took the Soviets only 10 days to defeat the entire Kwantung Army, the most prestigious Army of the Japanese Empire with over one million well-trained and well-equipped soldiers! That should tell you a little something about the kind of military machine the Soviet Union had developed in the course of the war against Nazi Germany (see here for a superb US study of this military operation)]
Did the Americans bring peace and prosperity to western Europe?

To western Europe, to some degree yes, and that is because it was easy for them: they ended the war almost “fresh”, their (stolen) homeland did not suffer the horrors of war and so, yes, they could bring in peanut butter, cigarettes and other material goods. They also made sure that Western Europe would become an immense market for US goods and services and that European resources would be made available to the US Empire, especially against the Soviet Union. And how did they finance this “generosity”? By robbing the so-called Third World blind, that’s all. Is that something to be proud of? Did Lenin not warn as early as 1917 that “imperialism is the highest stage of capitalism”? The wealth of Western Europe was built on the abject poverty of millions of Africans, Asians and Latin Americans.

But what about the future of Europe and the European people?

There are a number of things on which the Anglos and Stalin did agree at the end of WWII: the four Ds (denazification, disarmament, demilitarisation, and democratisation) applied to a united Germany, plus reparations to rebuild the USSR. Yes, Stalin wanted a united, neutral Germany. As soon as the war ended, however, the Anglos reneged on all of these promises: they created a heavily militarized West Germany, and they immediately recruited thousands of top Nazi officials for their intelligence services, for their rocket program, and to subvert the Soviet Union. Worse, they immediately developed plans to attack the Soviet Union. Right at the end of WWII, the Anglo powers had at least THREE plans to wage war on the USSR: Operation Dropshot, Plan Totality and Operation Unthinkable.

Monday, September 18, 2017

The Promise and Peril of Immersive Technologies


weforum |  The best place from which to draw inspiration for how immersive technologies may be regulated is the regulatory frameworks being put into effect for traditional digital technology today. In the European Union, the General Data Protection Regulation (GDPR) will come into force in 2018. Not only does the law require unambiguous consent for data collection, it also compels companies to erase individual data on request, with the threat of a fine of up to 4% of global annual turnover for breaches. Furthermore, enshrined in the regulation is the notion of ‘data portability’, which allows consumers to take their data across platforms – an incentive for innovative start-ups to compete with the biggest players. We may see similar regulatory norms develop for immersive technologies as well.

Providing users with sovereignty of personal data
Analysis shows that the major VR companies already use cookies to store data, while also collecting information on location, browser and device type and IP address. Furthermore, communication with other users in VR environments is being stored and aggregated data is shared with third parties and used to customize products for marketing purposes.

Concern over these methods of personal data collection has led to the introduction of temporary solutions that provide a buffer between individuals and companies. For example, the Electronic Frontier Foundation’s ‘Privacy Badger’ is a browser extension that automatically blocks hidden third-party trackers and allows users to customize and control the amount of data they share with online content providers. A similar solution that returns control of personal data should be developed for immersive technologies. At present, only blunt instruments are available to individuals uncomfortable with data collection but keen to explore AR/VR: using ‘offline modes’ or using separate profiles for new devices.
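Privacy Badger's documented approach is concrete enough to sketch: a third-party domain gets blocked once it has been caught tracking the user on three or more different first-party sites. Here is a minimal Python illustration of that counting heuristic; the three-site threshold matches the EFF's own description, but the class and the domain names are invented for this sketch:

```python
from collections import defaultdict

BLOCK_THRESHOLD = 3  # EFF's documented "seen tracking on 3 sites" rule


class TrackerBlocker:
    """Toy model of Privacy Badger's heuristic learning."""

    def __init__(self):
        # third-party domain -> set of first-party sites where it tracked us
        self.sightings = defaultdict(set)

    def observe(self, first_party, third_party):
        """Record that `third_party` set a tracking cookie on `first_party`."""
        self.sightings[third_party].add(first_party)

    def is_blocked(self, third_party):
        """Block once the tracker has been seen on enough distinct sites."""
        return len(self.sightings[third_party]) >= BLOCK_THRESHOLD


blocker = TrackerBlocker()
for site in ["news.example", "shop.example", "blog.example"]:
    blocker.observe(site, "tracker.example")

print(blocker.is_blocked("tracker.example"))  # True: seen on 3 sites
print(blocker.is_blocked("cdn.example"))      # False: never seen tracking
```

The point of the heuristic is that no hand-curated blocklist is needed; the extension learns which domains behave like trackers from observed behaviour alone.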

Managing consumption
Short-term measures also exist to address overuse in the form of stopping mechanisms. Pop-up usage warnings once healthy limits are approached or exceeded are reportedly supported by 71% of young people in the UK. Services like unGlue allow parents to place filters on content types that their children are exposed to, as well as time limits on usage across apps.

All of these could be transferred to immersive technologies, and they are complementary fixes to actual regulation, such as South Korea’s Shutdown Law. This prevents children under the age of 16 from playing computer games between midnight and 6am. The policy is enforceable because it ties personal details – including date of birth – to a citizen’s resident registration number, which is required to create accounts for online services. These solutions are not infallible: one can easily imagine that an enterprising child might ‘borrow’ an adult’s device after hours to work around the restrictions. Further study is certainly needed, but we believe that long-term solutions may lie in better design.
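The publicly reported parameters of the Shutdown Law (under 16, blocked from midnight to 6 a.m.) make the gatekeeping logic itself trivial; as noted, the hard part is reliably tying the birth date to the account. A minimal sketch of just the check, with all function names invented here:

```python
from datetime import date, datetime

CUTOFF_AGE = 16                  # reported age threshold
CURFEW_START, CURFEW_END = 0, 6  # midnight to 6 a.m., local time


def age_on(birth: date, when: date) -> int:
    """Whole years elapsed between the birth date and `when`."""
    years = when.year - birth.year
    if (when.month, when.day) < (birth.month, birth.day):
        years -= 1  # birthday hasn't happened yet this year
    return years


def may_play(birth: date, now: datetime) -> bool:
    """Apply the curfew: under-16s are blocked between 00:00 and 06:00."""
    if age_on(birth, now.date()) >= CUTOFF_AGE:
        return True
    return not (CURFEW_START <= now.hour < CURFEW_END)


# A 14-year-old at 1 a.m. is blocked; the same child at 7 p.m. is not.
teen = date(2003, 5, 1)
print(may_play(teen, datetime(2017, 9, 18, 1, 0)))   # False
print(may_play(teen, datetime(2017, 9, 18, 19, 0)))  # True
```

The enforceability question is entirely about the `birth` input: the code is only as good as the identity system feeding it, which is why the law leans on the resident registration number.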

Rethinking success metrics for digital technology
As businesses develop applications using immersive technologies, they should transition from using metrics that measure just the amount of user engagement to metrics that also take into account user satisfaction, fulfilment and enhancement of well-being. Alternative metrics could include a net promoter score for software, which would indicate how strongly users – or perhaps even regulators – recommend the service to their friends based on their level of fulfilment or satisfaction with a service.
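Net promoter score has standard arithmetic behind it: on a 0–10 “how likely are you to recommend this?” scale, the score is the percentage of promoters (ratings of 9–10) minus the percentage of detractors (0–6). A quick sketch with invented survey data:

```python
def net_promoter_score(ratings):
    """NPS = %promoters (9-10) minus %detractors (0-6) on a 0-10 scale."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / n


# 10 responses: 4 promoters, 3 passives (7-8), 3 detractors
ratings = [10, 9, 9, 10, 8, 7, 8, 5, 6, 3]
print(net_promoter_score(ratings))  # 10.0  (40% promoters - 30% detractors)
```

Note that passives (7–8) are deliberately ignored, which is why the metric rewards strong enthusiasm rather than mere engagement time.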

The real challenge, however, is to find measures that align business policy with user objectives. As Tristan Harris, founder of Time Well Spent, argues: “We have to come face-to-face with the current misalignment so we can start to generate solutions.” There are instances where improvements to user experience go hand-in-hand with business opportunities. Subscription-based services are one such example: YouTube Red eliminates advertisements for paying users, as does Spotify Premium. In these cases users can pay to enjoy an advertising-free experience, at no cost to the content developers, since they receive revenue in the form of paid subscriptions.

More work remains if immersive technologies are to enable happier, more fulfilling interactions with content and media. This will largely depend on designing technology that puts the user at the centre of its value proposition.

This is part of a series of articles related to the disruptive effects of several technologies (virtual/augmented reality, artificial intelligence and blockchain) on the creative economy.


Virtual Reality Health Risks...,


medium |  Two decades ago, our research group made international headlines when we published research showing that virtual reality systems could damage people’s health.

Our demonstration of side-effects was not unique — many research groups were showing that it could cause health problems. The reason that our work was newsworthy was because we showed that there were fundamental problems that needed to be tackled when designing virtual reality systems — and these problems needed engineering solutions that were tailored for the human user.

In other words, it was not enough to keep producing ever faster computers and higher definition displays — a fundamental change in the way systems were designed was required.

So why do virtual reality systems need a new approach? The answer to this question lies in the very definition of how virtual reality differs from how we traditionally use a computer.

Natural human behaviour is based on responses elicited by information detected by a person’s sensory systems. For example, rays of light bouncing off a shiny red apple can indicate that there’s a good source of food hanging on a tree.

A person can then use the information to guide their hand movements and pick the apple from the tree. This use of ‘perception’ to guide ‘motor’ actions defines a feedback loop that underpins all of human behaviour. The goal of virtual reality systems is to mimic the information that humans normally use to guide their actions, so that humans can interact with computer generated objects in a natural way.

The problems come when the normal relationship between the perceptual information and the corresponding action is disrupted. One way of thinking about such disruption is that a mismatch between perception and action causes ‘surprise’. It turns out that surprise is really important for human learning and the human brain appears to be engineered to minimise surprise.

This means that the challenge for the designers of virtual reality is that they must create systems that minimise the surprise experienced by the user when using computer generated information to control their actions.
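One textbook way to picture this adaptation is trial-by-trial error correction: the brain's internal estimate of the visuo-motor mapping is nudged toward whatever mapping the display imposes, and the "surprise" shrinks across trials. The sketch below is a generic delta-rule model, an illustration of the idea rather than anything from the research described here:

```python
def adapt(true_gain, trials=20, learning_rate=0.3):
    """Trial-by-trial error correction toward a distorted visuo-motor gain.

    Returns the final internal estimate and the per-trial error magnitudes.
    """
    estimate = 1.0  # start calibrated to the everyday world (gain of 1)
    errors = []
    for _ in range(trials):
        # The reach is planned with the current estimate but seen under
        # the display's true gain; the difference is this trial's surprise.
        error = true_gain - estimate
        errors.append(abs(error))
        estimate += learning_rate * error  # delta-rule update
    return estimate, errors


# A headset that magnifies apparent hand displacement by 1.5x:
estimate, errors = adapt(1.5)
print(round(estimate, 3))      # ~1.5: the user has adapted to the distortion
print(errors[0] > errors[-1])  # True: surprise shrinks across trials
```

The worry about after-effects follows directly from this picture: once `estimate` has drifted to 1.5, the first reaches back in the real world (true gain 1.0) are mis-calibrated until the loop re-adapts.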

Of course, one of the advantages of virtual reality is that the computer can create new and wonderful worlds. For example, a completely novel fruit — perhaps an elppa — could be shown hanging from a virtual tree. The elppa might have a completely different texture and appearance to any other previously encountered fruit — but it’s important that the information used to specify the location and size of the elppa allows the virtual reality user to guide their hand to the virtual object in a normal way.

If there is a mismatch between the visual information and the hand movements then ‘surprise’ will result, and the human brain will need to adapt if future interactions between vision and action are to maintain their accuracy. The issue is that the process of adaptation may cause difficulties — and these difficulties might be particularly problematic for children as their brains are not fully developed. 

This issue affects all forms of information presented within a virtual world (so hearing and touch as well as vision), and all of the different motor systems (so postural control as well as arm movement systems). One good example of the problems that can arise can be seen through the way our eyes react to movement.

In 1993, we showed that virtual reality systems had a fundamental design flaw when they attempted to show three dimensional visual information. This is because the systems produce a mismatch between where the eyes need to focus and where the eyes need to point. In everyday life, if we change our focus from something close to something far away our eyes will need to change focus and alter where they are pointing.

The change in focus is necessary to prevent blur and the change in eye direction is necessary to stop double images. In the real world these two responses are physically linked: a change in fixation distance changes both the focus the eyes must adopt and where the images fall at the back of the eyes.
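The mismatch is easy to put numbers on. Accommodation demand is the reciprocal of viewing distance (in dioptres), while the vergence angle follows from the separation of the eyes; in a fixed-focus headset the first is pinned to the screen's focal distance while the second tracks the virtual object. A sketch assuming a typical 63 mm interpupillary distance and optics focused at 2 m (both figures are illustrative, not taken from the article):

```python
import math

IPD = 0.063           # interpupillary distance in metres (typical adult)
SCREEN_FOCUS_M = 2.0  # assumed fixed focal distance of the headset optics


def accommodation_demand(d):
    """Focus demand in dioptres for an object at distance d (metres)."""
    return 1.0 / d


def vergence_angle_deg(d):
    """Angle between the two eyes' lines of sight for an object at d."""
    return math.degrees(2 * math.atan(IPD / (2 * d)))


virtual_object_m = 0.5  # object rendered at arm's length
conflict = (accommodation_demand(virtual_object_m)
            - accommodation_demand(SCREEN_FOCUS_M))

print(round(vergence_angle_deg(virtual_object_m), 2))  # ~7.21 deg convergence
print(round(conflict, 2))  # 1.5 dioptres of vergence-accommodation conflict
```

The eyes must converge as if the object were at 0.5 m while focusing at 2 m, a 1.5-dioptre disagreement between two responses that are normally yoked together.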

Sunday, September 17, 2017

Artificial Intelligence is Lesbian


thenewyorker |  “The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” Michal Kosinski, an organizational psychologist at the Stanford Graduate School of Business, told the Guardian earlier this week. The photo of Kosinski accompanying the interview showed the face of a man beleaguered. Several days earlier, Kosinski and a colleague, Yilun Wang, had reported the results of a study, to be published in the Journal of Personality and Social Psychology, suggesting that facial-recognition software could correctly identify an individual’s sexuality with uncanny accuracy. The researchers culled tens of thousands of photos from an online-dating site, then used an off-the-shelf computer model to extract users’ facial characteristics—both transient ones, like eye makeup and hair color, and more fixed ones, like jaw shape. Then they fed the data into their own model, which classified users by their apparent sexuality. When shown two photos, one of a gay man and one of a straight man, Kosinski and Wang’s model could distinguish between them eighty-one per cent of the time; for women, its accuracy dropped slightly, to seventy-one per cent. Human viewers fared substantially worse. They correctly picked the gay man sixty-one per cent of the time and the gay woman fifty-four per cent of the time. “Gaydar,” it appeared, was little better than a random guess.

The study immediately drew fire from two leading L.G.B.T.Q. groups, the Human Rights Campaign and GLAAD, for “wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation.” They offered a list of complaints, which the researchers rebutted point by point. Yes, the study was in fact peer-reviewed. No, contrary to criticism, the study did not assume that there was no difference between a person’s sexual orientation and his or her sexual identity; some people might indeed identify as straight but act on same-sex attraction. “We assumed that there was a correlation . . . in that people who said they were looking for partners of the same gender were homosexual,” Kosinski and Wang wrote. True, the study consisted entirely of white faces, but only because the dating site had served up too few faces of color to provide for meaningful analysis. And that didn’t diminish the point they were making—that existing, easily obtainable technology could effectively out a sizable portion of society. To the extent that Kosinski and Wang had an agenda, it appeared to be on the side of their critics. As they wrote in the paper’s abstract, “Given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”
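It helps to know what the 81% figure actually measures: "shown two photos, pick the right one" is forced-choice accuracy, which equals the classifier's ROC AUC, i.e. the probability that a randomly chosen positive example outscores a randomly chosen negative one. A small sketch with invented scores:

```python
from itertools import product


def forced_choice_accuracy(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly, ties
    counting half. This quantity is exactly the ROC AUC."""
    wins = 0.0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))


# Invented classifier scores, purely for illustration:
pos = [0.9, 0.8, 0.75, 0.4]   # scores given to the target class
neg = [0.7, 0.5, 0.45, 0.3]   # scores given to the other class
print(forced_choice_accuracy(pos, neg))  # 0.8125
```

This is also why such a number says little about outing individuals in the wild: a pairwise ranking score of 81% is very different from 81% precision when scanning a population where the base rate is low.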

Saturday, September 16, 2017

Kevin Shipp: The Deep State and the Shadow Government


activistpost |  “The shadow government controls the deep state and manipulates our elected government behind the scenes,” Shipp warned in a recent talk at a Geoengineeringwatch.org conference.

Shipp had a series of slides explaining how the deep state and shadow government functions as well as the horrific crimes they are committing against U.S. citizens.

Some of the revelations the former CIA anti-terrorism counter intelligence officer revealed included that “Google Earth was set up through the National Geospatial Intelligence Agency and In-Q-Tel.” Indeed, he is correct: the CIA and NGA backed the company Google acquired, Keyhole Inc., with Google paying an undisclosed sum to turn its tech into what we now know as Google Earth. Another curious investor in Keyhole Inc. was none other than In-Q-Tel, the venture capital firm run by the CIA, according to a press release at the time.

“The top of the shadow government is the National Security Agency and the Central Intelligence Agency,” Shipp said.

Shipp said that the CIA was created through the Council on Foreign Relations with no congressional approval, and that historically the CFR is also tied into the mainstream media (MSM). He elaborated that the CIA was the “central node” of the shadow government and controlled all of the other 16 intelligence agencies despite the existence of the DNI. The agency also controls defense and intelligence contractors, can manipulate the president and political decisions, and has the power to start wars, torture, initiate coups, and commit false flag attacks, he said.

As Shipp stated, the CIA was created by then-President Harry Truman’s signing of the National Security Act of 1947.

According to Shipp, the deep state comprises the military-industrial complex, intelligence contractors, defense contractors, MIC lobbyists, Wall Street (offshore accounts), the Federal Reserve, the IMF/World Bank, the Treasury, foreign lobbyists, and central banks.

In the shocking, explosive presentation, Shipp went on to state that there are “over 10,000 secret sites in the U.S.” that formed after 9/11. There are, he said, 1,291 secret government agencies, 1,931 large private corporations, and over 4,800,000 Americans that he knows of with a secrecy clearance, 854,000 of whom hold Top Secret clearance, having signed their lives away, bound by an agreement.

He also detailed how Congress is owned by the Military Industrial Complex through the Congressional Armed Services Committee (48 senior members of Congress), whose members receive money in return for votes on the military and intelligence spending bills.

He even touched on what he called the “secret intelligence industrial complex,” which he called the center of the shadow government including the CIA, NSA, NRO, and NGA.

Shipp further stated that around the “secret intelligence industrial complex” you have the big five conglomerate of intelligence contractors – Leidos Holdings, CSRA, CACI, SAIC, and Booz Allen Hamilton. He noted that the work they do is “top secret and unreported.”

Alfred W. McCoy: Pentagon Wonder Weapons For World Dominion


tomdispatch |  Ever since the Pentagon with its 17 miles of corridors was completed in 1943, that massive bureaucratic maze has presided over a creative fusion of science and industry that President Dwight Eisenhower would dub “the military-industrial complex” in his farewell address to the nation in 1961. “We can no longer risk emergency improvisation of national defense,” he told the American people. “We have been compelled to create a permanent armaments industry of vast proportions” sustained by a “technological revolution” that is “complex and costly.” As part of his own contribution to that complex, Eisenhower had overseen the creation of both the National Aeronautics and Space Administration, or NASA, and a “high-risk, high-gain” research unit called the Advanced Research Projects Agency, or ARPA, that later added the word “Defense” to its name and became DARPA.
 
For 70 years, this close alliance between the Pentagon and major defense contractors has produced an unbroken succession of “wonder weapons” that, at least theoretically, gave the U.S. military a critical edge in all major military domains. Even when defeated or fought to a draw, as in Vietnam, Iraq, and Afghanistan, the Pentagon’s research matrix has demonstrated a recurring resilience that could turn disaster into further technological advance.

The Vietnam War, for example, was a thoroughgoing tactical failure, yet it would also prove a technological triumph for the military-industrial complex. Although most Americans remember only the Army’s soul-destroying ground combat in the villages of South Vietnam, the Air Force fought the biggest air war in military history there and, while it too failed dismally and destructively, it turned out to be a crucial testing ground for a revolution in robotic weaponry.

To stop truck convoys that the North Vietnamese were sending through southern Laos into South Vietnam, the Pentagon’s techno-wizards combined a network of sensors, computers, and aircraft in a coordinated electronic bombing campaign that, from 1968 to 1973, dropped more than a million tons of munitions — equal to the total tonnage for the whole Korean War — in that limited area. At a cost of $800 million a year, Operation Igloo White laced that narrow mountain corridor with 20,000 acoustic, seismic, and thermal sensors that sent signals to four EC-121 communications aircraft circling ceaselessly overhead.

At a U.S. air base just across the Mekong River in Thailand, Task Force Alpha deployed two powerful IBM 360/65 mainframe computers, equipped with history’s first visual display monitors, to translate all those sensor signals into “an illuminated line of light” and so launch jet fighters over the Ho Chi Minh Trail where computers discharged laser-guided bombs automatically. Bristling with antennae and filled with the latest computers, its massive concrete bunker seemed, at the time, a futuristic marvel to a visiting Pentagon official who spoke rapturously about “being swept up in the beauty and majesty of the Task Force Alpha temple.”

However, after more than 100,000 North Vietnamese troops with tanks, trucks, and artillery somehow moved through that sensor field undetected for a massive offensive in 1972, the Air Force had to admit that its $6 billion “electronic battlefield” was an unqualified failure. Yet that same bombing campaign would prove to be the first crude step toward a future electronic battlefield for unmanned robotic warfare.

In the pressure cooker of history’s largest air war, the Air Force also transformed an old weapon, the “Firebee” target drone, into a new technology that would rise to significance three decades later. By 1972, the Air Force could send an “SC/TV” drone, equipped with a camera in its nose, up to 2,400 miles across communist China or North Vietnam while controlling it via a low-resolution television image. The Air Force also made aviation history by test firing the first missile from one of those drones.

The air war in Vietnam was also an impetus for the development of the Pentagon’s global telecommunications satellite system, another important first. After the Initial Defense Satellite Communications System launched seven orbital satellites in 1966, ground terminals in Vietnam started transmitting high-resolution aerial surveillance photos to Washington — something NASA called a “revolutionary development.” Those images proved so useful that the Pentagon quickly launched an additional 21 satellites and soon had the first system that could communicate from anywhere on the globe. Today, according to an Air Force website, the third phase of that system provides secure command, control, and communications for “the Army’s ground mobile forces, the Air Force’s airborne terminals, Navy ships at sea, the White House Communications Agency, the State Department, and special users” like the CIA and NSA.

At great cost, the Vietnam War marked a watershed in Washington’s global information architecture. Turning defeat into innovation, the Air Force had developed the key components — satellite communications, remote sensing, computer-triggered bombing, and unmanned aircraft — that would merge 40 years later into a new system of robotic warfare.

Friday, September 15, 2017

The Snowflakes Almost Stroked Out on Camera....,


Machine Learning and Data Driven Medical Diagnostics


labiotech |  Sophia Artificial Intelligence (AI) is already used worldwide to analyze next-generation sequencing (NGS) data of patients and make a diagnosis, independently of the indication. “We support over 350 hospitals in 53 countries,” CEO Jurgi Camblong told me.

With the new funds, Sophia Genetics is planning on increasing the number of centers using the technology. According to Camblong, this step is also key for the performance of the diagnostics algorithm, since the more data is available to the platform, the better the results it can achieve. “By 2020, with the network, members and data we have, we will move into an era of real-time epidemiology,” assures Camblong.

Sophia’s growing network of hospitals is also the key to its ultimate goal: democratizing data-driven medicine. Until now, access to NGS equipment and analysis expertise has not been affordable for all hospitals, especially those in underdeveloped regions of the world. Sophia Genetics is breaking this barrier by giving access to the network and its accumulated knowledge to small hospitals in Africa, Eastern Europe and Latin America without the resources to take on diagnostics themselves.

One of the areas where Sophia AI can have the biggest impact is cancer, which currently makes up about a third of the 8,000 new patient cases registered on the platform each month. With the resources the cash injection will bring, the company wants to take on the project of incorporating imaging data as well as genomic data to diagnose cancer and recommend the best treatment for each patient.  Fist tap Big Don.

Vikram Pandit Says 1.8 Million Bank Employees Gotta Go Gotta Go Gotta Go...,


bloomberg |  Vikram Pandit, who ran Citigroup Inc. during the financial crisis, said developments in technology could see some 30 percent of banking jobs disappearing in the next five years.

Artificial intelligence and robotics reduce the need for staff in roles such as back-office functions, Pandit, 60, said Wednesday in an interview with Bloomberg Television’s Haslinda Amin in Singapore. He’s now chief executive officer of Orogen Group, an investment firm that he co-founded last year.

“Everything that happens with artificial intelligence, robotics and natural language -- all of that is going to make processes easier,” said Pandit, who was Citigroup’s chief executive officer from 2007 to 2012. “It’s going to change the back office.”

Wall Street’s biggest firms are using technologies including machine learning and cloud computing to automate their operations, forcing many employees to adapt or find new positions. Bank of America Corp.’s Chief Operating Officer Tom Montag said in June the firm will keep cutting costs by finding more ways technology can replace people.

While Pandit’s forecast for job losses is in step with one made by Citigroup last year, his timeline is more aggressive. In a March 2016 report, the lender estimated a 30 percent reduction between 2015 and 2025, mainly due to automation in retail banking. That would see full-time jobs drop by 770,000 in the U.S. and by about 1 million in Europe, Citigroup said.
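The "1.8 million" in the headline is just the sum of the two Citigroup figures quoted above:

```python
us_jobs_lost = 770_000        # Citigroup's March 2016 estimate for the U.S., 2015-2025
europe_jobs_lost = 1_000_000  # Citigroup's estimate for Europe over the same period
total = us_jobs_lost + europe_jobs_lost
print(total)  # 1770000, i.e. roughly 1.8 million
assert round(total, -5) == 1_800_000
```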

Thursday, September 14, 2017

Who Controls Antarctica and Keeps It Strictly Off-Limits to You?


wikipedia |  Seven sovereign states had made eight territorial claims to land in Antarctica south of the 60° S parallel before 1961. These claims have been recognized only between the countries making claims in the area. All claim areas are sectors, with the exception of Peter I Island. None of these claims have an indigenous population. The South Orkney Islands fall within the territory claimed by Argentina and the United Kingdom, and the South Shetland Islands fall within the areas claimed by Argentina, Chile, and the United Kingdom. The UK, France, Australia, New Zealand and Norway all recognize each other's claims.[30] None of these claims overlap. Prior to 1962, British Antarctic Territory was a dependency of the Falkland Islands and also included South Georgia and the South Sandwich Islands. The Antarctic areas became a separate overseas territory following the ratification of the Antarctic Treaty. South Georgia and the South Sandwich Islands remained a dependency of the Falkland Islands until 1985 when they too became a separate overseas territory.

The Antarctic Treaty and related agreements regulate international relations with respect to Antarctica, Earth's only continent without a native human population. The treaty has now been signed by 48 countries, including the United Kingdom, the United States, and the now-defunct Soviet Union. The treaty set aside Antarctica as a scientific preserve, established freedom of scientific investigation and banned military activity on that continent. This was the first arms control agreement established during the Cold War. The Soviet Union and the United States both filed reservations against the restriction on new claims,[35] and the United States and Russia assert their right to make claims in the future if they so choose. Brazil maintains the Comandante Ferraz (the Brazilian Antarctic Base) and has proposed a theory for delimiting territories using meridians, which would give it and other countries a claim. In general, territorial claims below the 60° S parallel have only been recognised among those countries making claims in the area. However, although claims are often indicated on maps of Antarctica, this does not signify de jure recognition.

All claim areas, except Peter I Island, are sectors, the borders of which are defined by degrees of longitude. In terms of latitude, the northern border of all sectors is the 60° S parallel, which does not cut through any piece of land, continent or island, and is also the northern limit of the Antarctic Treaty. The southern border of all sectors collapses to a single point, the South Pole. The Norwegian sector is the only exception: the original claim of 1930 did not specify a northern or a southern limit, so its territory is defined only by eastern and western limits.[note 2]
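Because each sector (Norway's aside) is bounded by two meridians and the 60° S parallel, its area follows from elementary spherical geometry: the cap south of 60° S covers 2πR²(1 − cos 30°), and a sector spanning Δλ degrees of longitude takes Δλ/360 of that. A back-of-envelope illustration, treating Earth as a sphere of radius 6371 km (rounded figures, not official survey values):

```python
import math

R = 6371.0  # mean Earth radius, km

def sector_area_km2(delta_lon_deg, lat_limit_deg=-60.0):
    """Area of a spherical sector south of lat_limit, bounded by two
    meridians delta_lon_deg degrees of longitude apart."""
    polar_angle = math.radians(90.0 + lat_limit_deg)  # angle measured from the South Pole
    cap_area = 2.0 * math.pi * R * R * (1.0 - math.cos(polar_angle))
    return cap_area * delta_lon_deg / 360.0

cap = sector_area_km2(360.0)  # everything south of 60 deg S, land and ocean
print(round(cap / 1e6, 1))    # ~34.2 million km^2
```

Roughly 34 million km² lies inside the treaty area, more than double the 14 million km² of the continent itself, since the sectors also sweep in the surrounding Southern Ocean.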
The Antarctic Treaty states that acceding to the treaty:
  • is not a renunciation of any previous territorial claim.
  • does not affect the basis of claims made as a result of activities of the signatory nation within Antarctica.
  • does not affect the rights of a State under customary international law to recognise (or refuse to recognise) any other territorial claim.
What the treaty does affect are new claims:
  • No activities occurring after 1961 can be the basis of a territorial claim.
  • No new claim can be made.
  • No claim can be enlarged.
wikipedia |  Positioned asymmetrically around the South Pole and largely south of the Antarctic Circle, Antarctica is the southernmost continent and is surrounded by the Southern Ocean; alternatively, it may be considered to be surrounded by the southern Pacific, Atlantic, and Indian Oceans, or by the southern waters of the World Ocean. There are a number of rivers and lakes in Antarctica, the longest river being the Onyx. The largest lake, Vostok, is one of the largest sub-glacial lakes in the world. Antarctica covers more than 14 million km2 (5,400,000 sq mi),[1] making it the fifth-largest continent, about 1.3 times as large as Europe. 

About 98% of Antarctica is covered by the Antarctic ice sheet, a sheet of ice averaging at least 1.6 km (1.0 mi) thick. The continent has about 90% of the world's ice (and thereby about 70% of the world's fresh water). If all of this ice were melted, sea levels would rise about 60 m (200 ft).[43] In most of the interior of the continent, precipitation is very low, down to 20 mm (0.8 in) per year; in a few "blue ice" areas precipitation is lower than mass loss by sublimation and so the local mass balance is negative. In the dry valleys, the same effect occurs over a rock base, leading to a desiccated landscape.
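The ~60 m figure can be sanity-checked with a back-of-envelope calculation using rounded literature values (assumed here: ~26.5 million km³ of Antarctic ice, an ice-to-water density ratio of ~0.917, and ~361 million km² of ocean surface). The naive result overshoots slightly, since some Antarctic ice already sits below sea level and displaces ocean water:

```python
ice_volume_km3 = 26.5e6   # approx. Antarctic ice sheet volume (assumed round figure)
density_ratio = 0.917     # density of ice relative to liquid water
ocean_area_km2 = 361.0e6  # approx. global ocean surface area

water_km3 = ice_volume_km3 * density_ratio       # melted volume as liquid water
rise_m = water_km3 / ocean_area_km2 * 1000.0     # spread over the oceans, km -> m
print(round(rise_m))  # 67, the same order as the ~60 m cited above
assert 55 < rise_m < 75
```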


People Get Their Beliefs Reinforced Just Enough To Keep Fooling Themselves


alt-market |  Consider that maybe, just maybe, the conservative right is being tenderized in preparation for radicalization, just as much as the left has been radicalized. The more extreme the social divide, the more likely chaos and crisis will erupt, and the globalists never let a good crisis go to waste. Zealots, regardless of their claimed moral authority, are almost always wrong in history. Conservatives cannot afford to be wrong in this era. We cannot afford zealotry. We cannot afford biases and mistakes; the future of individual liberty depends on our ability to remain objective, vigilant and steadfast. Without self-examination, we will lose everything.

Back in 2012, I published a thorough examination of disinformation tactics used by globalist institutions as well as government and political outfits to manipulate the public and undermine legitimate analysts working to expose particular truths of our social and economic conditions.

If you have not read this article, titled Disinformation: How It Works, I highly recommend you do so now. It will act as a solid foundation for what I am about to discuss in this article. Without a basic understanding of how lies are utilized, you will be in no position to grasp the complexities of disinformation trends being implemented today.

Much of what I am about to discuss will probably not become apparent for much of the mainstream and portions of the liberty movement for many years to come. Sadly, the biggest lies are often the hardest to see until time and distance are achieved.

If you want to be able to predict geopolitical and economic trends with any accuracy, you must first accept a couple of hard realities. First and foremost, the majority of cultural shifts and fiscal developments within our system are a product of social engineering by an organized collective of power elites. Second, you must understand that this collective is driven by the ideology of globalism — the pursuit of total centralization of financial and political control into the hands of a select few deemed as "superior" concertmasters or "maestros."

As globalist insider, CFR member and mentor to Bill Clinton, Carroll Quigley, openly admitted in his book Tragedy And Hope:
"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent private meetings and conferences. The apex of the system was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world’s central banks which were themselves private corporations. Each central bank ... sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."
The philosophical basis for the globalist ideology is most clearly summarized in the principles of something called "Fabian Socialism," a system founded in 1884 which promotes the subversive and deliberate manipulation of the masses towards total centralization, collectivism and population control through eugenics. Fabian Socialists prefer to carry out their strategies over a span of decades, turning a population against itself slowly, rather than trying to force changes to a system immediately and outright.  Their symbol is a coat of arms depicting a wolf in sheep's clothing, or in some cases a turtle (slow and steady wins the race?) with the words "When I strike I strike hard."
Again, it is important to acknowledge that these people are NOT unified by loyalty to any one nation, culture, political party, mainstream religion or ethnic background.

Wednesday, September 13, 2017

Can The Anglo-Zionist Empire Continue to Enforce Its "Truth"?


medialens |  The goal of a mass media propaganda campaign is to create the impression that 'everybody knows' that Saddam is a 'threat', Gaddafi is 'about to commit mass murder', Assad 'has to go', Corbyn is 'destroying the Labour party', and so on. The picture of the world presented must be clear-cut. The public must be made to feel certain that the 'good guys' are basically benevolent, and the 'bad guys' are absolutely appalling and must be removed.

This is achieved by relentless repetition of the theme over days, weeks, months and even years. Numerous individuals and organisations are used to give the impression of an informed consensus – there is no doubt! Once this 'truth' has been established, anyone contradicting or even questioning it is typically portrayed as a shameful 'apologist' in order to deter further dissent and enforce conformity.

A key to countering this propaganda is to ask some simple questions: Why are US-UK governments and corporate media much more concerned about suffering in Venezuela than the far worse horrors afflicting war-torn, famine-stricken Yemen? Why do UK MPs rail against Maduro while rejecting a parliamentary motion to suspend UK arms supplies to their Saudi Arabian allies attacking Yemen? Why is the imperfect state of democracy in Venezuela a source of far greater outrage than outright tyranny in Saudi Arabia? The answers could hardly be more obvious.

Elite Establishment Has Lost Control of the Information Environment


tandfonline |  In 1993, before WiFi, indeed before more than a small fraction of people enjoyed broadband Internet, John J. Arquilla and David F. Ronfeldt of the Rand Corporation began to develop a thesis on “Cyberwar and Netwar” (Arquilla, J. J., and D. F. Ronfeldt. 1995. “Cyberwar and Netwar: New Modes, Old Concepts, of Conflict.” Rand Review, Fall, https://www.rand.org/pubs/periodicals/rand-review/issues/RRR-fall95-cyber/cyberwar.html; excerpted from “Cyberwar Is Coming,” Comparative Strategy 12: 141–165, 1993, doi:10.1080/01495939308402915). I found it of little interest at the time. It seemed typical of Rand’s role as a sometime management consultant to the military-industrial complex. For example, Arquilla and Ronfeldt wrote that “[c]yberwar refers to conducting military operations according to information-related principles. It means disrupting or destroying information and communications systems. It means trying to know everything about an adversary while keeping the adversary from knowing much about oneself.” A sort of Sun Tzu for the networked era.

The authors’ coining of the notion of “netwar” as distinct from “cyberwar” was even more explicitly grandiose. They went beyond bromides about inter-military conflict, describing impacts on citizenries at large:
Netwar refers to information-related conflict at a grand level between nations or societies. It means trying to disrupt or damage what a target population knows or thinks it knows about itself and the world around it. A netwar may focus on public or elite opinion, or both. It may involve diplomacy, propaganda and psychological campaigns, political and cultural subversion, deception of or interference with local media, infiltration of computer networks and databases, and efforts to promote dissident or opposition movements across computer networks. (Arquilla and Ronfeldt 1995)
While “netwar” never caught on as a name, I was, in retrospect, too quick to dismiss it. Today it is hard to look at Arquilla and Ronfeldt’s crisp paragraph of more than 20 years ago without appreciating its deep prescience.

Our digital environment, once marked by the absence of sustained state involvement and exploitation, particularly through militaries, is now suffused with it. We will need new strategies to cope with this kind of intrusion, not only in its most obvious manifestations – such as shutting down connectivity or compromising private email – but also in its more subtle ones, such as subverting social media for propaganda purposes.

Many of us thinking about the Internet in the late 1990s concerned ourselves with how the network’s unusually open and generative architecture empowered individuals in ways that caught traditional states – and, to the extent they concerned themselves with it at all, their militaries – flat-footed. As befitted a technology that initially grew through the work and participation of hobbyists, amateurs, and loosely confederated computer science researchers, and later through commercial development, the Internet’s features and limits were defined without much reference to what might advantage or disadvantage the interests of a particular government.

To be sure, conflicts brewed over such things as the unauthorized distribution of copyrighted material, presaging counter-reactions by incumbents. Scholars such as Harvard Law School professor Lawrence Lessig (Code Version 2.0. New York: Basic Books, 2006. http://codev2.cc/) mapped out how the code that enabled freedom (to some; anarchy to others) could readily be reworked, under pressure of regulators if necessary, to curtail it. Moreover, the interests of the burgeoning commercial marketplace and the regulators could neatly intersect: The technologies capable of knowing someone well enough to anticipate the desire for a quick dinner, and to find the nearest pizza parlor, could – and have – become the technologies of state surveillance.

That is why divisions among those who study the digital environment – between so-called techno-utopians and cyber-skeptics – are not so vast. The fact was, and is, that our information technologies enable some freedoms and diminish others, and more important, are so protean as to be able to rearrange or even invert those affordances remarkably quickly.

Fascist Traitors In House and Senate Tryna Criminalize Anti-Israel Speech


WaPo  |  When government takes sides on a particular boycott and criminalizes those who engage in a boycott, it crosses a constitutional line.

Cardin and other supporters argue that the Israel Anti-Boycott Act targets only commercial activity. In fact, the bill threatens severe penalties against any business or individual who does not purchase goods from Israeli companies operating in the occupied Palestinian territories and who makes it clear — say by posting on Twitter or Facebook — that their reason for doing so is to support a U.N.- or E.U.-called boycott. That kind of penalty does not target commercial trade; it targets free speech and political beliefs. Indeed, the bill would prohibit even the act of giving information to a U.N. body about boycott activity directed at Israel.

The bill’s chilling effect would be dramatic — and that is no doubt its very purpose. But individuals, not the government, should have the right to decide whether to support boycotts against practices they oppose. Neither individuals nor businesses should have to fear million-dollar penalties, years in prison and felony convictions for expressing their opinions through collective action. As an organization, we take no sides on the Israeli-Palestinian conflict. But regardless of the politics, we have and always will take a strong stand when government threatens our freedoms of speech and association. The First Amendment demands no less. 

WaPo  |   The Israel Anti-Boycott Act would extend the 1977 law to international organizations, such as the United Nations or even the European Union, that might parallel the Arab League’s original “blacklist” of companies doing business with Israel, which was the heart of its boycott.

It couldn’t come at a better time. Already, the U.N. Human Rights Council passed a resolution last year requesting its high commissioner for human rights to create a database of companies that operate in or have business relationships in the West Bank beyond Israel’s 1949 Armistice Lines, which includes all of Jerusalem, Israel’s capital.

If the high commissioner implements this resolution, as he appears determined to do, it will create a new “blacklist” that could subject American individuals and companies to discrimination, yet again, for simply doing business with Israel.

Moreover, the European Union has instituted a mandatory labeling requirement for agricultural products made in the West Bank and has restricted its substantial research and development funds to Israeli universities and companies to only those with no contacts with territories east of the Armistice Line. None of the many U.N. member states that are serial human rights violators are accorded similar treatment. Not Iran. Not Syria. Not North Korea. Only Israel.

These kinds of actions do not create the right atmosphere to prompt resumption of peace talks between Israel and the Palestinians that the Trump administration is seeking to jump-start.