10/31/2013

Writing a ReSharper plugin. Quick fixes.

Introduction.


It is almost impossible to find a .NET developer who does not use ReSharper. The reason is obvious: ReSharper is always a step ahead of Visual Studio in its refactoring, auto-completion and code-generation features. But not many know that since version 5 ReSharper has had a set of extension points that allow developers to create their own productivity tools.
In this series of articles we will deal with the creation of a useful ReSharper plugin for code that uses the .NET Reflection API.

Starting up.


The first step in ReSharper plugin development is to install the ReSharper SDK. The installer adds project templates to Visual Studio along with samples that cover most of the available extension points. The samples can be found in the “Program Files (x86)\JetBrains\ReSharper\v8.0\SDK\Samples\” folder. Although ReSharper's extension points, base classes and interfaces lack documentation, the sample source code is well written and easy to understand, so it will definitely give you a clue.

Implementing a ReSharper QuickFix.


A QuickFix is a set of executable actions that modify the part of code where the cursor is located. It is always associated with some kind of highlighting (custom highlightings will be described in a subsequent article). There are quite a lot of quick fixes available in ReSharper and they are frequently used by developers (e.g. make a property public or internal, add a null check for a reference, optimize imports, etc.).
The IQuickFix interface is quite simple and self-descriptive.

public interface IQuickFix
{
    IEnumerable<IntentionAction> CreateBulbItems();
    bool IsAvailable([NotNull] IUserDataHolder cache);
}

CreateBulbItems is responsible for reporting the menu items that will be available for execution. IsAvailable is used to report whether the quick fix is available in the current context. In most cases it is enough to inherit from the QuickFixBase class. A quick fix needs to have a public constructor that accepts any type implementing the IHighlighting interface as an argument. The last thing that needs to be done to make things work is to mark the implementing class with the QuickFixAttribute.

[QuickFix]
public class SampleQuickFix : QuickFixBase
{
    private readonly AccessRightsError _error;

    // The highlighting that triggered this fix is passed into the constructor.
    public SampleQuickFix(AccessRightsError error)
    {
        _error = error;
    }

    protected override Action<ITextControl> ExecutePsiTransaction(ISolution solution, IProgressIndicator progress)
    {
        // The actual fix would modify the PSI tree here; returning null means
        // no further action on the text control is required.
        return null;
    }

    public override string Text
    {
        get { return "Sample quick fix"; }
    }

    public override bool IsAvailable(IUserDataHolder cache)
    {
        return true;
    }
}


The quick fixes described in this article use highlightings that are already reported by ReSharper problem analyzers:
  • AccessRightsError – reported when you try to access an internal or private member in a scope where it is not allowed.
  • NotResolvedError – reported when a reference expression could not be resolved.

"Use Reflection" QuickFix.


It is a known fact that you can violate the encapsulation principle by using Reflection to access private or internal fields, properties and methods. When doing so, keep in mind that you may leave an object in an inconsistent state and it may behave in ways it was not designed to. So be aware!

As for me, I can hardly remember a project where we didn't need to access some internal methods or properties. As a quick example from a previous project: we needed to modify the default WPF DataGrid behavior on a column header click so that instead of sorting it selects all cells belonging to that column. The first approach using public APIs failed because it was too slow. Then we found that DataGrid uses internal methods for selecting large regions, and here Reflection came to the rescue.

The quick fix implemented here helps to generate the Reflection code needed to access a specific class member. It is quite easy to miss a required BindingFlags value, and you will only find out that your code does not work at run time, as the standalone sketch below illustrates.
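
The sketch below shows roughly the shape of call the generated code boils down to; the Widget class and its private Refresh method are made-up names used only for this illustration.

using System;
using System.Reflection;

class Widget
{
    private void Refresh(int count)
    {
        Console.WriteLine("Refreshed {0} item(s)", count);
    }
}

static class Program
{
    static void Main()
    {
        var widget = new Widget();

        // Dropping BindingFlags.NonPublic or BindingFlags.Instance here still compiles,
        // but InvokeMember then throws a MissingMethodException at run time.
        typeof(Widget).InvokeMember(
            "Refresh",
            BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.InvokeMethod,
            null,
            widget,
            new object[] { 3 });
    }
}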


Before diving into implementation details you may watch the video showing how the “Use Reflection” quick fix works.



Although the implementation is a bit verbose for a first example, it is easy to understand what happens there, especially if you have a basic understanding of what an AST is (ReSharper works with a code representation called the PSI tree; changes to the tree are immediately mirrored in the editor).

public UseReflectionQuickFix(AccessRightsError error)
{
     _error = error;
     _declaredElement = error.Reference.CurrentResolveResult.DeclaredElement;
     _languageForPresentation = error.Reference.GetTreeNode().Language;
}


The public constructor accepts the AccessRightsError highlighting, whose reference node represents the member access that violates access rights.
The main implementation resides in the ExecutePsiTransaction method. It handles several corner cases:
  • The value returned by a runtime Reflection invocation is always object. I wanted the quick fix to generate correct C# code, so I added a cast to the member's type.
  • Assignment operations are treated separately.
  • When invoking a method, an array of arguments needs to be passed to the InvokeMember method.

protected override Action<ITextControl> ExecutePsiTransaction(ISolution solution, IProgressIndicator progress)
{
    var accessExpression = _error.Reference.GetTreeNode() as IExpression;
    var replacementNode = accessExpression;           

    if (replacementNode == null)
        return null;

    var modifiers = _declaredElement as IModifiersOwner;
    if (modifiers == null)
        return null;

    bool isAssign = replacementNode.Parent is IAssignmentExpression;
    bool needsCasting = !isAssign && !(replacementNode.Parent is IExpressionStatement)
        && !_declaredElement.Type().IsVoid() && !_declaredElement.Type().IsObject();

    if (replacementNode.Parent is IInvocationExpression || replacementNode.Parent is IAssignmentExpression)
    {
        replacementNode = (IExpression)replacementNode.Parent;
    }

    CSharpElementFactory factory = CSharpElementFactory.GetInstance(replacementNode, applyCodeFormatter:true);

    AddSystemReflectionNamespace(factory);

           
    string flags = "BindingFlags.NonPublic";

    if (modifiers.IsStatic)
    {
        flags += "| BindingFlags.Static";
    }
    else
    {
        flags += "| BindingFlags.Instance";
    }

    flags += "| " + GetInvokeMemberBindingFlag(_declaredElement, isAssign);

    IExpression instanceExpression = modifiers.IsStatic ? factory.CreateExpression("null") : ((IReferenceExpression)accessExpression).QualifierExpression;
    IExpression argsExpression = factory.CreateExpression("null");

    if (isAssign)
    {
        argsExpression = factory.CreateExpression("new object[] { $0 }",
            ((IAssignmentExpression) replacementNode).Source);
    }
    if (replacementNode is IInvocationExpression)
    {
        var invocationExpression = (IInvocationExpression)replacementNode;

        if (invocationExpression.Arguments.Count != 0)
        {
                   
            argsExpression = CreateArrayCreationExpression(
                TypeFactory.CreateTypeByCLRName(
                "System.Object",
                accessExpression.GetPsiModule(),
                accessExpression.GetResolveContext()), factory);
            var arrayCreationExpression = argsExpression as IArrayCreationExpression;

            foreach (var arg in invocationExpression.ArgumentsEnumerable)
            {
                var initializer = factory.CreateVariableInitializer((ICSharpExpression) arg.Expression);
                arrayCreationExpression.ArrayInitializer.AddElementInitializerBefore(initializer, null);
            }
        }
    }

    var reflectionExpression = factory.CreateExpression("typeof($0).InvokeMember(\"$1\", $2, null, $3, $4)",
        ((IClrDeclaredElement)_declaredElement).GetContainingType(),
        _declaredElement.ShortName,
        flags,
        instanceExpression,
        argsExpression);

    if (needsCasting)
    {
        reflectionExpression = factory.CreateExpression("($0)$1",
            _declaredElement.Type(),
            reflectionExpression);
    }

    replacementNode.ReplaceBy(reflectionExpression);
    return null;
}


The following code imports the “System.Reflection” namespace if it is not already present in the using directives.

private void AddSystemReflectionNamespace(CSharpElementFactory factory)
{
    var importScope = CSharpReferenceBindingUtil.GetImportScope(_error.Reference);
    var reflectionNamespace = GetReflectionNamespace(factory);
    if (!UsingUtil.CheckAlreadyImported(importScope, reflectionNamespace))
    {
        UsingUtil.AddImportTo(importScope, reflectionNamespace);
    }
}

private static INamespace GetReflectionNamespace(CSharpElementFactory factory)
{
    var usingDirective = factory.CreateUsingDirective("System.Reflection");
    var reference = usingDirective.ImportedSymbolName;
    var reflectionNamespace = reference.Reference.Resolve().DeclaredElement as INamespace;
    return reflectionNamespace;
}


The CSharpElementFactory class is used to create PSI tree nodes. It provides methods for creating different kinds of AST nodes and formats the generated code immediately. Notice that the format strings use the ‘$0’ placeholder instead of the ‘{0}’ syntax familiar to .NET developers, and that other PSI tree nodes can be passed directly as format arguments.

"Did you mean?" QuickFix.

I will not dive deep into implementation details, as I'm quite sure you have enough information already. This quick fix works with the NotResolvedError highlighting and offers the type members whose names are most similar to the unresolved reference. Since it needed to provide multiple menu items, this quick fix implements the IQuickFix interface directly. The implementation uses the Levenshtein distance to find the most similar names and a part of the ReSharper auto-completion API to get the symbols available for the given reference.
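
For illustration, here is a minimal sketch of the Levenshtein distance computation (an example of the metric itself, not the plugin's actual code):

using System;

static class NameSimilarity
{
    // Classic dynamic-programming Levenshtein distance: the number of single-character
    // insertions, deletions and substitutions needed to turn one string into the other.
    public static int LevenshteinDistance(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];

        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;

        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost);
            }
        }

        return d[a.Length, b.Length];
    }
}

Candidates with the smallest distance to the unresolved name are then offered as separate menu items.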


Debugging and testing.

If you have created the plugin project from the provided template, it will start another Visual Studio instance in debug mode without any extra actions required. The project simply has its “Start Action” set to “Start External Program” with the command line arguments “/ReSharper.Plugin ReReflection.dll /ReSharper.Internal”. These arguments make ReSharper load the built .dll as a plugin, and the “/ReSharper.Internal” switch enables access to many internal menus that help you debug your plugin, analyze the PSI tree, etc.

It is hard to keep your plugin stable without continuous integration. Luckily, the ReSharper SDK provides test base classes for most kinds of extension points. For reference, you may look at the tests implemented for the quick fixes described here.

Conclusion.


This first article in the series describes the basics of ReSharper plugin creation. Of course the code is far from production quality, and in debug mode you may encounter unhandled exceptions reported by ReSharper. In the next article I will describe how custom highlightings are implemented and what an ElementProblemAnalyzer is.

The plugin code is available on GitHub.

Useful links.



10/24/2013

Localization of e-learnings - Articulate Studio in focus


With the rapid development of digital content and electronic media, more and more e-learning materials are coming our way for localization. Some have been particularly tricky to localize properly. In this article we will focus on e-learning courses created in Articulate Studio.

For those who have never heard of this product, it is a solution for enhancing PowerPoint presentations and designing e-learning courses on top of them. The tool is a good fit for creating short e-learning courses, especially if you already have a prepared PowerPoint presentation.

Let us assume that you made a presentation for your co-workers on a new reporting approach in your company. Isn’t it reasonable to reinforce the presented information by using applicable training materials? You may want to consider Articulate Studio as a tool to help you.

With this software you will be able to fill your presentation with interactive quizzes, flowcharts and animated explanations. However, its primary benefit is that it provides an interactive course with presentation material, extended questions, audio feedback and additional reading. As a matter of fact, an ordinary PowerPoint presentation turns into a SCORM-compatible course with all its advantages.

Yet, at the same time, one might experience certain difficulties when localizing this type of material. Let's suppose your company has offices overseas; it would then be great to have these materials adapted for colleagues in other countries.

The first and biggest problem we encountered concerns translating the text content. Articulate Studio provides a convenient way to export texts to a .doc file. Unfortunately, there is no reverse conversion, so you have to substitute the original text with the translations manually, which takes a lot of time and effort. PowerPoint has the same flaw, though.

Apart from this, not all of the course's interface strings are exported to the .doc file; help texts such as “Click next to continue”, “Your score”, “Result”, etc. often do not make it into the file.
Another disappointment is the limited number of built-in interface languages (only 10), although the product does allow adding new languages manually.
We also noticed that the title length in Articulate Quiz or Articulate Engage is limited: when translating the English titles we had to shorten the translations considerably.

On the whole, however, localization problems aside, this software solution can be considered quite smart and helpful. The author can add interactive content, Flash videos, various quizzes, sound effects, and additional learning materials in the form of supplements, etc. The product is perfect for companies that have a large stock of learning materials, such as presentations, and wish to import them into an LMS.

Just yesterday I got a new Articulate Studio version. One of the new features is import from an Excel spreadsheet or a txt file. I will write about the improvements soon; I hope there is something for localization too.

About Articulate Studio - http://www.articulate.com/

By Bohdan Kruk

Senior Localization Specialist

10/15/2013

The not so short introduction to EC2 instances in AWS SDK for .NET

Intro

Today is the era of cloud computing: never-ending computing resources available on demand. Amazon is one of the biggest players on the cloud computing market. It provides different cloud services: Elastic Compute Cloud, Elastic Block Store, Simple Email Service, Cloud Drive and others. Amazon Elastic Compute Cloud (EC2) allows people to launch virtual servers in the Amazon Web Services (AWS) cloud. It provides various types of virtual computing environments, storage and virtual isolated networks.

In this post we'll learn how to work with EC2 using the AWS SDK for .NET. You can download the SDK from the official website. We'll assume you've already created a sample project in your favorite C#/.NET development environment and referenced AWSSDK.dll in that project. We'll mostly use the Amazon.EC2, Amazon.EC2.Model and Amazon.Runtime namespaces. So let's take a look at some common operations with the AWS cloud: launching, tagging and stopping EC2 instances, describing the environment and others.

Instantiation

Every operation with AWS is executed through the AmazonEC2 interface. We can create an Amazon EC2 client using a simple constructor:

AmazonEC2 amazonClient = new AmazonEC2Client(accessKey, secretKey);

where accessKey and secretKey are the credentials we can get in our Amazon account after registration.

Another simple constructor takes accessKey and secretKey from the application configuration file; we only have to pass an element of the RegionEndpoint enumeration:

public AmazonEC2Client(RegionEndpoint region);

In case we need something more sophisticated, there are plenty of constructor overloads. One of the most useful is the constructor with AWSCredentials and AmazonEC2Config parameters:

public AmazonEC2Client(AWSCredentials credentials, AmazonEC2Config config)

For example, we can pass a BasicAWSCredentials instance with accessKey and secretKey as the first parameter, and set up proxy settings via the AmazonEC2Config class and its ProxyHost/ProxyPort properties as the second parameter, as sketched below.
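
A minimal sketch of that idea, assuming the SDK types named above (BasicAWSCredentials, AmazonEC2Config with ProxyHost/ProxyPort); the proxy address is a made-up placeholder.

var credentials = new BasicAWSCredentials(accessKey, secretKey);

var config = new AmazonEC2Config();
config.ProxyHost = "proxy.example.com";   // hypothetical corporate proxy
config.ProxyPort = 8080;

AmazonEC2 amazonClient = new AmazonEC2Client(credentials, config);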

Validating credentials

We can issue any simple request to AWS in order to validate the credentials. Let's choose a request that won't lead to a big data transfer between our application and the AWS services, because credential verification can be used quite frequently. For example, we will use the DescribeAvailabilityZones request, but we could also use DescribeRegions or any other. If the request call throws an exception (AmazonEC2Exception), we can check its type via the string ErrorCode property. If it equals “AuthFailure”, then our credentials are invalid. The source code can look like this:

try
{
    var ec2Client = new AmazonEC2Client(accessKey, secretKey);
    var response = ec2Client.DescribeAvailabilityZones();
   
    return true;
}
catch (AmazonEC2Exception e)
{
    // "AuthFailure" means the credentials are invalid; rethrow anything else.
    if (e.ErrorCode == "AuthFailure")
        return false;

    throw;
}


Describing environment

If our application lets the user configure API credentials, or we just want to show some useful information about the service, we will have to describe our Amazon environment: key pair names, security groups, placement groups, availability zones, VPC subnets, etc.

The general request template looks like this:


var ec2Client = new AmazonEC2Client(accessKey, secretKey);
var describeGroupsRequest = new DescribePlacementGroupsRequest();

try
{
    var response = ec2Client.DescribePlacementGroups(describeGroupsRequest);
    var placementGroupsResult = response.DescribePlacementGroupsResult;
    var placementGroupsInfo = placementGroupsResult.PlacementGroupInfo;
    placementGroupNames = placementGroupsInfo.Select(group => group.GroupName);
}
catch (Exception e)
{
    placementGroupNames = Enumerable.Empty<string>();
}


If we are going to describe many environment items and copy-paste the code above into each method, it will do some redundant work: creating and destroying an instance of the AmazonEC2Client class every time. Instead, we can create the client instance once, execute all the needed requests, accumulate the results in some storage and then return them, as shown in the sketch below.
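
A minimal sketch of that idea, reusing the request/result pattern shown above. EnvironmentInfo is a made-up container class, and the result property names (AvailabilityZone, ZoneName, PlacementGroupInfo) follow the singular naming convention discussed below; check them against your SDK version.

public class EnvironmentInfo
{
    public IEnumerable<string> AvailabilityZones { get; set; }
    public IEnumerable<string> PlacementGroups { get; set; }
}

public static EnvironmentInfo DescribeEnvironment(string accessKey, string secretKey)
{
    // One client instance shared by all describe requests.
    var ec2Client = new AmazonEC2Client(accessKey, secretKey);

    var zonesResponse = ec2Client.DescribeAvailabilityZones(new DescribeAvailabilityZonesRequest());
    var groupsResponse = ec2Client.DescribePlacementGroups(new DescribePlacementGroupsRequest());

    return new EnvironmentInfo
    {
        AvailabilityZones = zonesResponse.DescribeAvailabilityZonesResult
            .AvailabilityZone.Select(zone => zone.ZoneName),
        PlacementGroups = groupsResponse.DescribePlacementGroupsResult
            .PlacementGroupInfo.Select(group => group.GroupName)
    };
}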


Describing instances

The DescribeInstances request is used to track all the instances we own. We can request useful information about all instances or only certain ones. To select those instances we can fill the Filter parameter of DescribeInstancesRequest to match specified instance IDs, key pair names, availability zones, instance types, the current instance state and many other attributes.

var ec2Client = new AmazonEC2Client(accessKey, secretKey);

var describeRequest = new DescribeInstancesRequest();
describeRequest.Filter.Add(new Filter() { Name = "instance-state-name", Value = "running" });

var runningInstancesResponse = ec2Client.DescribeInstances(describeRequest);
var runningInstances = runningInstancesResponse.DescribeInstancesResult.Reservation
    .SelectMany(reservation => reservation.RunningInstance)
    .Select(instance => instance.InstanceId);


The runningInstances variable will contain the IDs of all running instances as a result of this DescribeInstancesRequest with the instance-state-name filter. Note an interesting code convention of the Amazon SDK: a list of objects (like the list of RunningInstance items) is named in the singular (RunningInstance). This reflects the real nature of these classes: they are simply an object model of the XML responses returned by AWS.

Running instances

Running instances is definitely one of the main purposes of using AWS EC2. It’s a bit different for On-Demand and Spot Instances.

To launch an On-Demand instance, we have to create a RunInstances request and fill in its properties wisely. First, we need to set the image ID to launch and the preferred number of instances from this image. This number consists of a minimum and a maximum number of instances to launch. If Amazon's capacity allows launching the maximum number of instances, it does so; if not, it tries its best to satisfy us. The request fails if Amazon is not able to launch the minimum number of instances we requested. Second, we can specify a key pair name, instance type, security group and many other things in our RunInstancesRequest.

var ec2Client = new AmazonEC2Client(accessKey, secretKey);
 
var runRequest = new RunInstancesRequest();
runRequest.ImageId = imageID;
runRequest.MinCount = minimumValue;
runRequest.MaxCount = maximumValue;
runRequest.InstanceType = "t1.micro";
// some other configurations

var runInstancesResponse = ec2Client.RunInstances(runRequest);
var runInstancesResult = runInstancesResponse.RunInstancesResult;
var runningIDs = runInstancesResult.Reservation.RunningInstance.Select(i => i.InstanceId);


The response contains the instances that were actually started. Such a request can throw an AmazonEC2Exception with the ErrorCode “InstanceLimitExceeded” if we are not allowed to run as many instances as were requested (a limitation of the current plan in the Amazon account).

Requesting On-Demand instances can also fail through no fault of our own, when Amazon is not able to provide the requested number of EC2 instances in our region at the moment. In this case an AmazonEC2Exception with the “InsufficientInstanceCapacity” error code is thrown.
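
A sketch of telling these two failure modes apart by inspecting ErrorCode; the handling suggested in the comments is only one possible strategy.

try
{
    var runInstancesResponse = ec2Client.RunInstances(runRequest);
}
catch (AmazonEC2Exception e)
{
    if (e.ErrorCode == "InstanceLimitExceeded")
    {
        // The account's plan does not allow that many instances:
        // reduce MaxCount or request a limit increase.
    }
    else if (e.ErrorCode == "InsufficientInstanceCapacity")
    {
        // Amazon has no spare capacity right now: retry later, reduce the
        // instance count, or try another availability zone / instance type.
    }
    else
    {
        throw;
    }
}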

Tagging instances

An EC2 tag is just a key-value pair which we can assign to each running instance (and to spot requests). Tags are useful if we want to supply our instances with some additional, application-specific information. Keys and values are simple strings, but nothing prevents us from base64-encoding anything we like into a tag.
We can tag only already running instances, using a CreateTagsRequest: we need to specify the list of instance IDs and the tags we want to assign to them. See the sample code below:

var ec2Client = new AmazonEC2Client(accessKey, secretKey);
 
var createTagRequest = new CreateTagsRequest();
createTagRequest.ResourceId.Add(someInstanceId);
createTagRequest.Tag.Add(new Tag { Key = "NodeRole", Value = "LogCollector" });

ec2Client.CreateTags(createTagRequest);


Stopping instances

Just as with launching, the AWS SDK provides us with an API to stop running instances. The procedure is again a bit different for On-Demand and Spot Instances.

To stop an On-Demand instance, we create a TerminateInstances request and pass the ID of the instance we want to terminate. Simple as that:

var ec2Client = new AmazonEC2Client(accessKey, secretKey);
 
var terminateRequest = new TerminateInstancesRequest();
terminateRequest.InstanceId.Add(instancesId);

var terminateResponse = ec2Client.TerminateInstances(terminateRequest);
var terminatingInstances = terminateResponse.TerminateInstancesResult.TerminatingInstance.Select(ti => ti.InstanceId);


Conclusion

The AWS SDK allows us to manage EC2 instances easily and is consistent in its code conventions. It lets us manage everything from code as if we were working in the Amazon web dashboard, and it is worth looking into as cloud computing becomes the new trend. This post covers some basic operations with the SDK; to continue learning it, take a look at the official documentation.

10/02/2013

Getting started with PowerShell

PowerShell was released by Microsoft long ago, on January 30, 2007. Despite this, many developers, build engineers and system administrators continue to use the more familiar batch scripts (aka bat or cmd scripts). If you are one of them, this post may help change your attitude towards PowerShell.

Running your first PowerShell script

Most probably you are used to running cmd.exe. Try powershell this time:


If you have seen the PowerShell console before, you may immediately object: why was the console background black, but now it is an awesome blue? Well, this is because the blue color is set in the Windows shortcut properties, so you need to run PowerShell from the shortcut to get it:



Awesome!
Now you can use the PowerShell console as if it were the cmd console. Type cd, dir, mkdir, echo, rm, etc. You may even start believing you are working in the cmd console.


The PowerShell team did their best to make the transition from batch scripts to PowerShell scripts as easy as possible. If you are familiar with Linux bash scripts you'll also be happy with PowerShell:


All this is possible because of the PowerShell aliasing feature. Commands (in PowerShell they are called cmdlets, pronounced command-lets) have aliases:

Here we see that dir is simply an alias for the Get-ChildItem cmdlet, and by default Get-ChildItem has three aliases: dir, gci and ls. You can create new aliases if you wish. Let's discover the real names of all the aliases we used in our first script:

PS C:\tmp> get-alias cd, echo, type, rm
CommandType     Name
-----------     ----
Alias           cd -> Set-Location
Alias           echo -> Write-Output
Alias           type -> Get-Content
Alias           rm -> Remove-Item

As you can see, the PowerShell cmdlet for navigating to a new location is Set-Location, but it is so much simpler to just type cd.

Is PowerShell better than cmd?

Have you ever tried to write a simple if statement in a cmd script, or (tears in my eyes...) a for loop?

REM print numbers from 1 to 10
for /l %x in (1, 1, 10) do echo %x

And that code works only if you type it in the cmd console. If you save it to a file and run the file (try it), you'll be surprised. Here is the PowerShell analogue:

# print numbers from 1 to 10
for ($i=1; $i -le 10; ++$i) { echo $i }

The only strange part of that code is -le (you cannot use < and > in shell scripts); everything else is simple and familiar.
With PowerShell you have the full power of .NET in your hands. Cmd reminds one of the old DOS times.
PowerShell is not just a command-line shell; it is a powerful scripting environment. You can write your own "methods" (cmdlets) and use third-party cmdlet libraries.
PowerShell comes with the Windows PowerShell ISE editor, in which you can write, run and debug your scripts:

And finally, PowerShell has a default blue background and cmd has a default black background. Just kidding.

Running PowerShell script stored in a file

By convention, PowerShell scripts are stored in files with the .ps1 extension. Create a simple script and save it to a test.ps1 file:

# this is comment
Write-Output "Hello from PowerShell script file"

In Explorer, navigate to the directory containing the test.ps1 file and double-click it. Most probably test.ps1 will be opened as a plain text file in notepad.exe, because by default .ps1 files are associated with notepad.exe. This protects users from accidentally running malicious scripts (a common problem with bat and cmd scripts).
One of the easiest ways to run a script is to launch the PowerShell console, navigate to the directory containing test.ps1, type .\test.ps1 and press Enter:

PS C:\> cd C:\tmp
PS C:\tmp> .\test.ps1
File C:\tmp\test.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.
At line:1 char:11
+ .\test.ps1 <<<<
    + CategoryInfo          : NotSpecified: (:) [], PSSecurityException
    + FullyQualifiedErrorId : RuntimeException
PS C:\tmp>

One more anti-malware protection: by default you cannot run PowerShell script files! Let's fix this. Run PowerShell as administrator and type the following command:

PS C:\Windows\system32> Set-ExecutionPolicy RemoteSigned -Force

The above line changes the PowerShell script execution policy from Restricted to RemoteSigned, which allows running all locally created scripts. You can read more about PowerShell execution policies on the Internet.
Now you can finally run the test.ps1 script:

PS C:\tmp> .\test.ps1
Hello from PowerShell script file
PS C:\tmp>

Success! Yes, it took a bit of investigation and work to get things working.

PowerShell in action

Let’s see some examples of useful things you can do with PowerShell.
  • Find all Internet Explorer processes and terminate them:
Get-Process iexplore | Stop-Process
  • Restart TeamCity Server service:
PS C:\tmp> Get-Service TeamCity | Restart-Service
  • Create Windows shortcut, for example to notepad.exe:
$shell = New-Object -COM WScript.Shell
$shortcut = $shell.CreateShortcut("C:\tmp\ShortCutToNotepad.lnk")
$shortcut.TargetPath = "%windir%\system32\notepad.exe"
$shortcut.Save()
  • Get 10 newest records from System event log:
Get-EventLog system -newest 10
  • List Windows Registry Key containing auto-start applications:
cd HKLM:\Software\Microsoft\Windows\CurrentVersion\Run
Get-ItemProperty .


Using C# code in PowerShell script

Being a .NET-based technology, PowerShell gives you the full power of the .NET platform. Does your script need to generate a random number? No problem: just use the System.Random class. PowerShell actually lets you write scripts almost as if in C#, so you can implement arbitrary custom logic in PowerShell!

PS C:\tmp> $rnd = New-Object System.Random
PS C:\tmp> $i = $rnd.Next(10)
PS C:\tmp> [System.Console]::WriteLine("And the answer is ... {0}", $i)
And the answer is ... 7
PS C:\tmp>

Incredible, right?
What time is it now?

Get-Date
Tuesday, October 01, 2013 1:17:12 PM
[System.DateTime]::Now
Tuesday, October 01, 2013 1:17:22 PM

You can even use the Windows Forms .NET API to create and show graphical dialogs directly from PowerShell:

Add-Type -AssemblyName System.Windows.Forms
$form = New-Object Windows.Forms.Form
$form.Size = New-Object Drawing.Size @(200,100)
$form.StartPosition = "CenterScreen"
$form.Text = "Hello!!!"
$form.ShowDialog()

Want to use the Amazon .NET API to start a new micro instance? Download the Amazon .NET API, copy AWSSDK.dll to the C:\tmp directory, and then write a script:

[Reflection.Assembly]::LoadFile("C:\tmp\AWSSDK.dll")
$AWSAccessKey = "<your access key>"
$AWSSecretKey = "<your secret key>"
$ec2 = [Amazon.AWSClientFactory]::CreateAmazonEC2Client($AWSAccessKey, $AWSSecretKey)
$request = New-Object Amazon.EC2.Model.RunInstancesRequest
$request.ImageId = $amiImageId
$request.InstanceType = 't1.micro'

$ec2.RunInstances($request)

Again, all of the above code is very similar to C# code. With time you'll get used to the fact that all variable names in PowerShell start with a dollar sign ($), that instead of the new keyword you write New-Object, to the square-brackets-and-double-colon static method invocation syntax, and other peculiarities.

PowerShell Remoting

PowerShell would not be so popular without the PowerShell Remoting feature, which was introduced in PowerShell 2.0. It allows you to run PowerShell scripts on remote machines just as you run them on your local machine, much like the Linux Secure Shell (SSH) protocol. As with everything potentially dangerous in PowerShell, Remoting is not enabled by default, and you cannot enable it remotely :). So you must have RDP access to the machine on which you want to enable PowerShell Remoting and be a member of the Administrators group on that machine. You can Google how to enable PowerShell Remoting; here is a short instruction:
  • RDP to TEST-PC and start an elevated PowerShell console (Run as Administrator).
PS C:\Users\roman.turovskyy> Enable-PSRemoting -Force
WinRM has been updated to receive requests.
WinRM service started.
  • Return to your working machine and start an elevated PowerShell console. Enter the following:
PS C:\windows\system32> cd WSMan:\localhost\Client
PS WSMan:\localhost\Client> Set-Item .\TrustedHosts * -Force
  • This will allow you to connect to any machine. Now enable PowerShell Remoting on your own machine and establish a remote session with it (localhost):
PS C:\tmp> Enable-PSRemoting -Force
PS C:\tmp> Enter-PSSession -ComputerName localhost
[localhost]: PS C:\Users\roman.turovskyy\Documents>
  • The above output indicates that PowerShell Remoting on your machine is properly configured. Now establish a remote session with the TEST-PC machine:
[localhost]: PS C:\Users\roman.turovskyy\Documents> Exit-PSSession
PS C:\tmp> Enter-PSSession -ComputerName TEST-PC
[TEST-PC]: PS C:\Users\roman.turovskyy\Documents>

Now you can run PowerShell commands on TEST-PC! Ensure that this is really TEST-PC by reading the computer name from an environment variable:

[TEST-PC]: PS Env:\> echo $env:COMPUTERNAME
TEST-PC

With PowerShell Remoting you can run your script on many machines. For example, suppose you have a task to stop the Windows Time service on many machines. Once you have PowerShell Remoting configured on these machines you can use the following script:

@('TEST-PC', 'TEST-PC2') | foreach { Invoke-Command -ComputerName $_ -ScriptBlock { Get-Service W32Time | Stop-Service } }

$_ refers to the current iteration value in the foreach statement. You can also store a newline-separated list of machines in a file:

PS C:\tmp> Get-Content machines.txt | foreach { Invoke-Command $_ { $env:COMPUTERNAME  } }
TEST-PC
TEST-PC1

Conclusion

PowerShell is a very powerful shell scripting language. Being built on top of .NET, it allows reusing existing .NET classes and writing arbitrary custom logic almost like in C#. PowerShell Remoting enables remote scriptable control over computers within your network. On our project we use PowerShell to deploy products to many machines within the LAN. If you are not using PowerShell yet, take a look at it!