
Month: June 2015

#OutofOffice: How I work location-independently, without an office, all over the world!

Posted on Updated on

Below …. I'd say this doesn't apply to every country!

I'm an online entrepreneur myself, or blogpreneur, and I work as a digital nomad from all over the world. It all started with my adventure travel blog Off The Path, which first showed me the opportunities the internet holds. Today I run several online businesses – among other things, I am co-founder of Germany's largest blogger school, I've developed an app and a plugin, and with Support Camp I lead a small team that solves WordPress problems quickly and easily. And all of that without a fixed office. I work and live wherever I happen to like it!

My laptop and I are finally #OutofOffice!

At the moment that's actually Berlin, which has also been my little home base up to now, the place I kept coming back to. I have my own apartment here in the middle of Kreuzberg, and until last year I even had a fixed desk in a co-working office at Moritzplatz. But to be honest, I can only stand it here in summer. As a half-Spaniard, I simply need warmth!

I gave up the desk because I'm simply on the road too much and never really used it. At some point it became clear to me that I could just as well save the money. Because as soon as autumn sets in, I slip away to warmer countries anyway, such as Thailand or Indonesia. Until now I've always sublet my apartment while traveling, but this time my girlfriend Line and I are going one step further: we're giving up the apartment entirely.

So I won't have a real home base anymore. No office, no apartment of my own, no real home. The thought is a little scary. But it helps to first make a plan for what comes next. For example, we now want to spend the winter in South Africa and pitch our tent in Cape Town for a few months. After that we'll probably move to a different city every month, diving fully into digital nomadism!

I now open my laptop anywhere

What exactly does that mean? From October we won't have our own apartment in Berlin anymore, but we will have endless apartments all over the world thanks to Airbnb and the like. We no longer open our laptop in the same spot every time, but in co-working spaces and cool cafés around the globe. Sometimes in the digital nomad hotspot Chiang Mai, sometimes at Hubud in Bali, sometimes in a hipster café in Sydney, on a mountain in New Zealand, or in Cape Town. Wherever we like it best.

And the great thing is, we're not alone in this. All over the world, digital nomads come together, work alongside or with each other, draw on the community and exchange ideas. So it's by no means the case that we're all alone or even completely crazy. By now it's a real movement, and more and more people are choosing a location-independent life, combining travel with their work and their lives.

Just recently, the third DNX took place here in Berlin, where over 450 aspiring as well as already established German digital nomads came together, exchanged ideas and learned together in workshops. And I think anyone who has had a taste of life as a digital nomad never wants to go back to a normal 9-to-5 job, riding the same route to work again and again only to sit down on the same chair at the same desk.

#OutofOffice takes quite a bit of discipline

As the saying goes, all that glitters is not gold. Being constantly on the move while working on the side takes a great deal of discipline. We work where others go on vacation, perhaps lounging lazily right next to us. The small but crucial difference, however, is that we don't have to return to our daily grind after a week or two; we can freely choose our place of work, our hours, indeed our whole life!

And for me, this new way of working means pure freedom. I'm completely independent and can decide how, when and where I work. I don't have to start at 9; I can start hammering away at the keys at 6 in the morning. I can treat myself to a break at 10 and get the best flat white at my favorite café next door, or take the whole afternoon off and work late into the evening instead. Tuesday can become my Sunday, and vice versa.

Out of the 9-to-5 and into pure freedom!

I don't hate Mondays the way most people do. For me, Monday is like any other day: a new day on which I can fulfill myself and live my dreams. I don't have to take a day off just to apply for a new passport at the public office; I can simply do it. I don't have to jog along the Paul-Lincke-Ufer in Kreuzberg at 7 in the evening when everyone else does. I can just do it at noon, when the path is clear of people and I can actually run straight ahead instead of staging a zigzag race.

Being able to work in a café or outside on the grass in the sun, rather than between the same four walls, is fantastic. I'm so much more creative, more motivated and more productive. I start my day full of drive and full of ideas. I can truly develop freely, and naturally I get much more done.

The only thing that Line, I, and all other digital nomads need for this lifestyle besides our laptops is stable, fast internet. And luckily, internet access is getting better worldwide. We even had better reception in the Australian outback than in the Berlin subway!

Even if we sometimes race through cities somewhat desperately in search of cafés with fast WiFi, and can turn into frustrated little devils when the internet doesn't work, this independence, this absolute freedom that this lifestyle and the internet make possible, is something I never want to do without again.

But enough about me and my way of living #OutofOffice. I always love hearing how others manage their working lives. That's why I'm now part of the Instagram challenge by Microsoft Deutschland and sit on its jury. You can join in too and win some really great prizes, such as a Lumia 930 or a Universal Mobile Keyboard! All you have to do is share your favorite places to work and find inspiration on Instagram with the hashtag #OutofOffice.

So go ahead – my fellow jurors and I look forward to your pictures!

A guest post by Sebastian Canaves (@s_canaves)
Travel blogger, digital nomad and blogpreneur


ConfigMgr 2012 and utilizing BranchCache in your environment basics


Charlesa.us

There are plenty of articles out there about BranchCache, but I wanted to throw one together that covers an overview of the basics of using it with ConfigMgr 2012 and the logs you can use to verify that it is working.

Let’s first start with a little background on BranchCache:

* Was introduced with Windows Server 2008 R2 and Windows 7

* BranchCache clients can act as peer distribution points for other BranchCache clients on the same subnet (key point: other BranchCache clients)

How it works:

1. A BranchCache Content Server (BCS) breaks content into blocks with unique hashes for each block.

2. A BranchCache client requests content from the BCS. The BCS responds with a list of blocks and hashes.

3. The client queries local peers for any of the blocks.

4. If the blocks are found on the local subnet, they are retrieved from peers.

5. If any block is not available from a peer, it is retrieved from the BCS. Once retrieved, it is made available to peers.
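The block-and-hash flow above can be sketched in a few lines of Python. This is a simplified illustration of the idea only, not the actual BranchCache wire protocol; the 64 KB block size and the SHA-256 hash are assumptions made for the sketch:

```python
import hashlib

BLOCK_SIZE = 64 * 1024  # illustrative; not the real BranchCache segment/block size

def split_into_blocks(content: bytes):
    """Step 1: the content server breaks content into blocks and hashes each one."""
    blocks = [content[i:i + BLOCK_SIZE] for i in range(0, len(content), BLOCK_SIZE)]
    return [(hashlib.sha256(b).hexdigest(), b) for b in blocks]

def fetch(content_hashes, peer_cache, server_blocks):
    """Steps 2-5: pull each block from a peer if its hash is cached, else from the server."""
    result, from_peer, from_server = b"", 0, 0
    for h in content_hashes:
        if h in peer_cache:            # step 4: block found on the local subnet
            result += peer_cache[h]
            from_peer += 1
        else:                          # step 5: fall back to the content server
            block = server_blocks[h]
            peer_cache[h] = block      # now available to other peers
            result += block
            from_server += 1
    return result, from_peer, from_server

# Three distinct 64 KB blocks plus a short tail block.
content = b"".join(bytes([i]) * BLOCK_SIZE for i in range(3)) + b"tail"
hashed = split_into_blocks(content)
server_blocks = dict(hashed)
hashes = [h for h, _ in hashed]

peer_cache = {}
_, p1, s1 = fetch(hashes, peer_cache, server_blocks)     # first client: all from server
data, p2, s2 = fetch(hashes, peer_cache, server_blocks)  # second client: all from peers
print(p1, s1, p2, s2, data == content)
```

The second client pulls every block from the local cache that the first client populated, which is exactly the WAN saving BranchCache is after.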

BranchCache has two modes of operation:

– Distributed cache mode: the content cache at a branch office is distributed among client computers.

– Hosted cache mode: the content cache at a branch office is hosted on one or more server computers called hosted cache servers.

Distributed cache mode is designed for small branch offices that do not contain a local server for use as a hosted cache server. Distributed cache mode allows your organization to benefit from BranchCache without the requirement of additional hardware in branch offices.

Is there a way to pre-stage distributed and hosted cache content?

Yes, in some cases. Pre-hashing and preloading content is a new BranchCache feature for Windows Server 2012 and Windows 8.

For distributed cache mode, your content servers must be running Windows Server 2012, and your client computers must be running Windows 8. For hosted cache mode, your content servers and your hosted cache server must be running Windows Server 2012.

Preloading content on hosted cache servers: https://technet.microsoft.com/en-us/library/jj572970.aspx

BranchCache ports:

  • HTTP (port 80) for content retrieval using the BranchCache retrieval protocol
  • WS-Discovery (port 3702 UDP) for content discovery in distributed cache mode
  • HTTPS (port 443) for content upload in hosted cache mode using the hosted cache protocol

Next let’s look at how it fits into working with ConfigMgr 2012 R2:

* Distribution points also support BranchCache, a feature of Windows Server 2008 R2 \ Windows Server 2012 and Windows 7 \ Windows 8.1

* When enabled, a copy of content retrieved from a server is cached in the branch office

* BranchCache caches HTTP, HTTPS, BITS, or SMB based content

* There is no special configuration option in ConfigMgr 2012 to enable BranchCache, since it's not a feature of Configuration Manager 2012

* The only configuration needed is to make sure your deployments are enabled for downloading content and running the applications locally

* ConfigMgr only supports distributed cache mode for BranchCache

Deployments:

– Software update deployments: Download Settings dialog (be sure to check "Allow clients to share content with other clients on the same subnet")

– Package deployments: Distribution Points dialog (be sure to check "Allow clients to share content with other clients on the same subnet")

– Application deployments: Content tab on the deployment type (be sure to check "Allow clients to share content with other clients on the same subnet")

Now that we have touched on the background of BranchCache and how it fits into play with ConfigMgr 2012, let’s look at some items that are important to note when using BranchCache in your environment:

* In order to benefit from BranchCache, all machines at a location meant to utilize it must be configured for BranchCache at the operating-system level; machines that are not configured will not partake in BranchCache sharing

* BranchCache is not managed by, nor an active component of, ConfigMgr; it is simply utilized by deployments on systems that are enabled to support it

Ways to verify BranchCache from a ConfigMgr deployment:

A. The following logs come into play with BranchCache

– DataTransferService.log

– FileBits.log

– ContentTransferManager.log

B. Validation Method for a deployment

(1) Start Performance Monitor

(2) Add monitor elements

(3) Add BranchCache (specifically)

– BITS: Bytes from cache

– BITS: Bytes from server

– Discovery: Attempted discoveries

– Discovery: Successful discoveries

– Retrieval: Bytes from cache

– Retrieval: Bytes from server

– Retrieval: Bytes served (shows how much this computer is providing to other peers)

(4) Monitor these counters

(5) Deploy something NEW from SCCM ensuring that “download from distribution point” is enabled in order to force BITS.  You could also manually transfer data via BITSAdmin.exe

(6) Run this locally to see some good basic configuration and utilization info: netsh branchcache show status all

from Charles Allen’s Blog

Power Query Updates


from MS:

***************

This month’s Power Query update includes four new or improved features including:

  • Enhanced Privacy levels dialog
  • New Text column filters: “Does Not Begin With…” and “Does Not End With…”
  • Improved Salesforce connectors
  • Improved “Excel Workbook” connector

*****************

Nevertheless, the second one is the most interesting:

*****************

New Text column filters: “Does Not Begin With…” and “Does Not End With…”

We added a couple of new filters for Text columns so you can filter by “Does Not Begin With…” or “Does Not End With…” In previous versions of Power Query, these filters required custom formula editing, but are now much easier to apply by simply selecting them from the Text Filters menu.

*****************
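Under the hood, these filters simply negate the existing text comparisons; the generated M formula comes out as something like `each not Text.StartsWith([Name], "Widget")`. For readers more at home outside of M, the same logic looks like this in Python (the column values here are made up for illustration):

```python
rows = ["Widget-A", "Gadget-A", "Widget-B", "Gizmo-C"]

# "Does Not Begin With..." filter
not_begins = [r for r in rows if not r.startswith("Widget")]

# "Does Not End With..." filter
not_ends = [r for r in rows if not r.endswith("-A")]

print(not_begins)  # ['Gadget-A', 'Gizmo-C']
print(not_ends)    # ['Widget-B', 'Gizmo-C']
```

The update just surfaces this negation in the Text Filters menu so no formula editing is needed.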

original: http://blogs.office.com/2015/06/04/4-updates-to-power-query/

Data Nirvana: Power Pivot, OData, and Acumatica ERP


below taken from powerpivotpro.com

Guest Post By: Tim Rodman

There is finally an ERP product that gets it, that embraces Power Pivot, Power BI, and the more than 800 million users of Microsoft Excel.

“What Power Pivot did to Excel, Acumatica is doing to the world of ERP”.

Acumatica recently announced the ability to securely connect to Acumatica ERP data through OData

This is huge. It’s as if two worlds are colliding, creating a good kind of explosion.


You: “So, wait a second, what is ERP?”

Me: “Great question, I should probably back up for a minute.”

What is ERP

ERP stands for Enterprise Resource Planning and it is the computer system that ties all of the departments in your organization together.

If you work with Power Pivot, there is a very good chance that you also work with ERP data.
SAP, Oracle, Microsoft Dynamics, Infor, Epicor, Sage, and NetSuite are all examples of ERP systems.

Many of the ERP systems in companies today are very old and very ugly. The problem is that they are expensive to replace so they continue to exist through a patchwork of duct tape fixes that have been cobbled together over the years.

However, ERP systems contain a virtual Fort Knox of data that can lead to incredible insights if analyzed correctly (with Power Pivot of course).

The ERP Problem

However, like Fort Knox, the data in ERP systems can be very difficult to access.

ERP moves like molasses. It’s like the banking industry in that ERP is the last to adopt current trends in technology.

Accessing ERP data can require navigating through layers and layers of antiquated technology.

The CSV Solution

CSV stands for Comma-Separated Values.

I have always found it interesting how often Rob works with CSV files or some other exported data format in his Power Pivot models.

I often wonder if CSV files are the most frequently used data source in Power Pivot models.

ERP is a big reason for this. If you’re lucky enough to get access to the data without having to learn programming, you often have to manually export the data to CSV if you want to be able to do anything meaningful with it.

Updating CSV data is manual. When you want to update your Power Pivot model, you have to re-export the data from the ERP system manually. And this can’t be automated to take place every night while you’re sleeping.

Now, maybe you don’t sleep at night, preferring instead to scour the internet for the near infinite amount of Excel knowledge that it contains. But still, wouldn’t it be nice to eliminate the manual CSV refresh process from your nightly routine?

Of course, Power Query makes the TL part of ETL (Extract/Transform/Load) MUCH easier, but it still seems so archaic that you have to do the Extract part manually.

The Expensive Solution

If you happen to work for a larger company then you probably have a Data Warehouse. And you can point Power Pivot to it without having to manually export data.

But Data Warehouse solutions are expensive.

Even if you are lucky enough to have one, getting new data sources added often requires heavy IT involvement and can take a long time.

If changing a Power Pivot model is like working with clay, changing a Data Warehouse is like working with concrete. Jackhammer anyone?

The Agile Solution

Wouldn’t it be nice to bypass CSV and the Data Warehouse entirely to securely connect directly to your live ERP data?

If this were possible, then you could use Power Update to automatically refresh your data at night while you sleep (or while participating in Excel forum discussions) and your Power Pivot models would effectively become an agile Data Warehouse.

What if IT could make the ERP data easier to access without sacrificing the Fort Knox security that is required for ERP systems?

What if IT could do this just by checking a box on an ERP inquiry screen, without having to buy 3rd-party software or make any risky network firewall modifications?

What if a power user (no geeky developer necessary) could create new data sources using a nice graphical screen within the ERP application?

What if that data source could pass through the existing ERP security layer without the need to overlay an additional security matrix?

If this were possible, IT people and business people would spontaneously head for the nearest campsite, build a fire, and sing kumbaya together.

Seriously, I think it would look something like this.

I mean, wouldn’t this be great? S’mores anyone?

Enter OData

Well, I’m here to tell you that this is possible! And it’s possible today, like right now.

The foundation of the whole thing is OData.

OData is a protocol that Power Query and Power Pivot already understand; it’s already available on the Power Query and Power Pivot menus.

OData is like a United Nations translator that takes whatever foreign language the ERP data is speaking and translates it into a language that Power Pivot can understand.

OData is a secure, reliable, and increasingly popular way to deliver data. OData is already being used as the protocol behind the Salesforce CRM menu items in Power Query and the Microsoft SharePoint data connection in Excel.

When you see new data sources get added to the Power Query menu, the chances are that OData is the enabler behind-the-scenes.
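Because OData is just HTTP plus a set of URL query conventions, consuming a feed needs nothing exotic. The sketch below composes an OData query URL in Python; the endpoint and field names are invented for illustration (a real Acumatica feed would have its own host and inquiry names):

```python
from urllib.parse import urlencode

def odata_url(base, entity, select=None, filter_=None, top=None):
    """Compose an OData query URL using the standard $-prefixed system query options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)   # only these columns
    if filter_:
        params["$filter"] = filter_            # server-side row filter
    if top:
        params["$top"] = str(top)              # limit the result set
    query = urlencode(params)                  # percent-encodes $, commas, spaces
    return f"{base}/{entity}" + (f"?{query}" if query else "")

url = odata_url(
    "https://example.com/odata",   # hypothetical endpoint
    "SalesOrders",                 # hypothetical entity/inquiry name
    select=["OrderNbr", "Amount"],
    filter_="Amount gt 1000",
    top=50,
)
print(url)
```

From Power Pivot or Power Query you never build these URLs by hand; you paste the base feed URL into the OData feed dialog and the tools generate the query options for you. The point is that any OData-speaking client, Power Pivot included, can ask the ERP for exactly the rows and columns it wants.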

Enter Acumatica ERP – Power Pivot’s new best friend

Acumatica is the world’s fastest growing Cloud ERP product.

Acumatica has stormed onto the SMB (Small and Medium-Sized Business) scene in the last few years. You can buy Acumatica and install it anywhere you like (on-premise, 3rd party hosting, Azure, Amazon, etc.) or you can license the SaaS version which is hosted on Acumatica’s servers. The cool thing is that you get the same product (running the exact same code/bits) regardless of which option you choose. This is rare in ERP these days because most ERP “cloud” products are completely different than the on-premise versions that bear their name.

Acumatica, on the other hand, is a 100% web-based application that was built from the ground up on modern Microsoft technology (C#.NET) starting in 2008 and it is turning the traditionally stodgy world of ERP on its head. You can even make Acumatica screens available in their mobile app without needing a developer.

Last month, Acumatica announced that they are exposing their data securely via the OData protocol.

Just imagine, you can now connect a fire hose from your ERP system (if you have Acumatica) to Power Pivot and flood your model with live data that can be refreshed with the click of a button! Additionally, you can run it through Power Query to better shape the data since Power Query also supports OData connections.

The Power Pivot / Acumatica Details

If you want to see more details, you can watch this 6 minute video where I describe how a power user can build a data source graphically within Acumatica:

You can then connect to the OData data source from Excel (or Power Pivot). This 5 minute video shows you how:

I think this is a big deal. If you want, you can read more about why I think the Excel / Power Pivot / Acumatica combination is so powerful.

Microsoft also thinks this is a big deal. They even listed it on the front page of their recent worldwide Build 2015 developer conference held in San Francisco.

Microsoft Build 2015

Acumatica OData at Microsoft Build 2015 occupied the stage for 5 minutes during the keynote as you can see in this 5 minute video:

Bottom line, Acumatica just became the most Power Pivot friendly ERP product on the planet.

And we are one step closer to Data Nirvana.


Tim Rodman is a “recovering CPA” who loves Excel (especially Power Pivot) and ERP software. You can find his blog posts over at www.AcumaticaReports.com.

Architecture Matters: Why We Re-Architected Everything When Our Competitors Wouldn’t


A good read by Brad Anderson, Corporate Vice President at Microsoft, Enterprise Client & Mobility! The original link is here

A couple of years ago we had to make a very difficult choice about how we were going to deliver Enterprise Mobility management to our customers.

At the time, we knew that the solution had to be delivered as a cloud service since that was the only way we could help the 100,000’s of customers and 100,000,000’s of mobile devices using the solution stay up-to-date amidst the constant change in the world of mobility.

To do this, we had two options:

  1. Take SCCM and host it in the cloud and call it a cloud service
    or
  2. Build a true cloud service from scratch

Not a day goes by that I don’t grow more grateful that we decided on the latter.

Choosing option #2 came with costs, however.

Building our solution from scratch put us a little behind in the Enterprise Mobility Management market, but the agility it has enabled has made that worthwhile. For example, consider the volume of new value we have released through the monthly updates to Intune since November 2014. This agility will remain a key part of our differentiation, as well as a key part of your ongoing value, for years to come.

As I mentioned in a previous blog post, when we decided to move our EMM to the cloud, we did not simply port SCCM to Azure. Instead, we decided to invest in a cloud service architecture for our solution to Empower Enterprise Mobility. Aside from the obvious difference of single- versus multi-tenancy, and the need to embrace and interact with other Microsoft Services (AAD, O365, etc.), we knew a modern architecture was necessary to allow us to really benefit from the advantages the cloud offers. These benefits include our continuous (micro-service) deployment, elastic scale, technology heterogeneity, etc.

This setup also provides a huge amount of flexibility and autonomy to our engineering teams – which means they can innovate, update, and improve services constantly.

Intune’s mobility micro-services are fully hosted in Azure and, even if we were in a single datacenter, this would provide tremendous benefits. However, with the global reach of Azure, the benefits of availability, performance and jurisdictional governance and sovereignty are massive. (This is a topic I’ll cover at length in a future post.)

This use of Azure is a huge advantage. There is a mantra we use here that “we will follow Azure wherever it goes.” With 19 “regions,” Azure truly has a global footprint. Each of those regions has one or more datacenters with built-in redundancy within the datacenter, as well as redundancy to other regions.

Microsoft is continuing to build Azure capacity around the world to keep up with accelerating demand, deepen the global footprint, and accommodate specific needs, like regulatory requirements that require data to be kept within a country (China) or region (EU).

As Azure continues to stand up data centers around the world, Intune and all of EMS will take advantage of the capacity to deliver services in that region. Currently, our services run in North America, Europe, and APAC – with geo-redundancy within each of those regions.


There is an incredible amount of innovation happening inside these Azure datacenters. Everything inside of them is automated and driven through policy, and, with our “Cloud-first” engineering principle, we prove out new capabilities in areas such as software-defined networking, software-defined compute, and software-defined storage at-scale in these cloud datacenters and then deliver them for you to use in Windows Server and System Center. This is one of the reasons why our pace of innovation has accelerated so much for our on-premises products.

We are also innovating how we house the compute, storage, and networking in our datacenters. One example of this is our Chicago datacenter (pictured below).

[Photos: container units at the Chicago datacenter]

I think the last thing anyone would guess when looking at the containers above is that this is a cloud data center.

Each of those containers houses about 2,500 servers, and we can pull a container in, hook it up to the power and the network, and have 2,500 additional servers up and running in minutes. When it is time to upgrade to new hardware, we simply remove the old container and replace it with the latest and greatest. The speed and agility this provides us is phenomenal.

With this as a background, here’s how some of the Azure platform components that we use in Intune (and in all our services) provide the scale, performance, availability, and reliability required by our customers around the globe:

Within Intune, each micro-service owns its own data. This architectural setup is very different from a traditional on-premises client-server product, where everything is usually written to and read from a single database. The compute requirements of the Intune service are affinitized with its data, allowing for performance, and each micro-service can scale independently via independent service partitioning. This means that each micro-service is able to make changes and updates without requiring communication and coordination with other teams. Everyone working on Intune is thus able to move faster, and changes in one micro-service do not introduce risks in other micro-services. Bottom line: You get more value and capabilities faster – and they are more resilient.

Resiliency is a native part of this architecture, and many of our micro-services rely on Azure Service Fabric for availability and scalability. To give you an idea of just how much we value the resilient nature of our infrastructure (and, by extension, your organization) consider this: We maintain 5 (yes, five) replicas of our data in many of our micro-services. As nodes in Azure go up and down, the Windows Fabric automatically brings up a new instance of a service partition replica. Thus, there is no single point of failure in either the infrastructure or the data. In addition to automatically spinning up new nodes and seamless service functioning when this happens, we can also add new nodes to our subscription. When nodes are added, we can start moving hot service partition replicas to the new nodes. This gives us tremendous flexibility.

Also important is the “geo-affordances” provided by Azure. As noted above, we keep a lot of data in memory affinitized to compute, but, additionally, we write out to Azure Storage for those micro-services that need data durability. Today, we use Azure table and blob. This data gets replicated using Geo Redundant Storage. With this type of storage, a transaction is replicated synchronously to 3 nodes within the primary region and queued for asynchronous replication to a secondary region (hundreds of miles away from the primary), where the data is again made durable with triple replication. As a result, if there is a disaster, we can pick up with the persisted data from a completely different region. Bottom line: You get more value and capabilities faster – and they are more resilient.
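The write path described above (synchronous triple replication in the primary region, followed by asynchronous replication to a secondary region) can be modeled in a few lines. This is a conceptual sketch of the ordering guarantees only, not Azure Storage's actual implementation; the class and key names are invented:

```python
from collections import deque

class GeoRedundantStore:
    """Toy model: a write is durable once all 3 primary replicas have it;
    geo-replication to the secondary region happens later, asynchronously."""

    def __init__(self):
        self.primary = [dict(), dict(), dict()]    # 3 replicas in the primary region
        self.secondary = [dict(), dict(), dict()]  # 3 replicas in the secondary region
        self.geo_queue = deque()                   # pending async geo-replication

    def write(self, key, value):
        for replica in self.primary:               # synchronous: all 3 before we ack
            replica[key] = value
        self.geo_queue.append((key, value))        # queued for the secondary region
        return "ack"                               # durable in the primary region

    def drain_geo_queue(self):
        while self.geo_queue:                      # the async replication step
            key, value = self.geo_queue.popleft()
            for replica in self.secondary:         # triple-replicated again
                replica[key] = value

store = GeoRedundantStore()
store.write("device42", "enrolled")
primary_copies = sum("device42" in r for r in store.primary)      # 3 immediately
secondary_before = sum("device42" in r for r in store.secondary)  # 0, not yet shipped
store.drain_geo_queue()
secondary_after = sum("device42" in r for r in store.secondary)   # 3 after async step
print(primary_copies, secondary_before, secondary_after)
```

The window between the ack and the drain is why the secondary region is a disaster-recovery target rather than a synchronous mirror: a write is safe in one region immediately, and in two regions shortly after.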

We have done a massive amount of work to successfully architect the Intune and the EMS services in Azure as true cloud services. The end result of this unique architecture is a huge benefit for your organization.

Windows 10 is coming…

Posted on Updated on

A free upgrade for Windows 7, too!

…very soon, namely on July 29! Terry Myerson, Microsoft's Executive Vice President of Operating Systems, announced the good news today on "Blogging Windows" in "Hello World: Windows 10 Available on July 29".

Anyone who wants to try Windows 10 right now can test the latest Technical Preview at https://insider.windows.com/ at any time.

Microsoft will offer owners of Windows 7 and Windows 8.x licenses a free upgrade to Windows 10 until July 29, 2016 (that is, for one year from availability), see Upgrade to Windows 10 for free.

Technically, a "genuine" Windows 7 Service Pack 1 (SP1) or Windows 8.x is required to perform the upgrade. The Windows 10 upgrade can be reserved in advance with the "Windows 10 app," which then delivers a notification as soon as the upgrade can be performed.

The app is delivered via Windows Update to the Windows operating systems listed above and then automatically appears as an icon in the system tray at the bottom right. The notification can also be turned off in the app settings.

Details on the upgrade process can be found in the Windows 10 FAQ.

There were also many talks on Windows 10 at the BUILD and IGNITE conferences.
Have a look: Channel9.msdn.com: Ignite Windows 10 Sessions.

Last but not least, a note about our event at Microsoft Austria in Vienna on June 19:
The latest on Windows 10 and other cool things – a summary of //build and Ignite
There we will present the most important news from the Microsoft world, from the community for the community. We are still working on the final agenda and are looking forward to the event!