Talking Tech Today

What makes for a fast development PC

When you are choosing a PC for development work, these are the factors to consider to get maximum productivity:

  • CPU Speed
  • Storage (e.g. SSD)
  • Memory
  • Monitors

For each of the above I will also provide some guidance on return on investment. After all, there is no point in overpaying for a PC, but we also need to consider how much the average developer earns. Based on developer salaries in the USA, the average is close to $100,000 a year, or around $50 per hour.

Developers are expensive, so it's important to give them great equipment so you can get the best value from them. A slower, less expensive computer is a false economy, as the small cost difference of getting a fast computer is quickly recovered through the increased productivity of the developer.

One of the worst things for developer productivity is interruptions, as it can take a developer 15+ minutes to get back to being productive. The slower the PC, the higher the chance a developer will be interrupted. Waiting minutes for a computer to finish a task will result in the developer's mind wandering (after all, we are all human), and then it takes time for the developer to get back up to speed. These interruptions need to be avoided at all costs.

Modern development is very demanding on computers: most build processes are multi-core now, unit tests run on every build or code change, web pages are refreshed with the latest code, mobile emulators are updated, and so on. It's very easy for a development computer to become slow, which introduces interruptions to the development workflow and makes the developer less productive.

CPU Speed

This can make a surprising difference in how much time a developer ends up waiting on the PC.  Here is an example of the difference it can make.

I have an HP Z420 workstation with a Xeon E5-1620 and a PC with an Intel Skylake i7 6700K overclocked to 4.6 GHz. Both are quad-core processors. The i7 is 2x as fast as the HP workstation at compiling code (5 minutes vs 2.5 minutes)!! I was very surprised when I ran the benchmarks; I thought the Xeon CPU would give the consumer i7 CPU a run for its money. I was very wrong.

Since it only costs a few hundred dollars when ordering a computer to upgrade to the fastest i7 processor, it is well worth the investment, especially considering most computers are good for three years or more of use.

For the CPU I would recommend the fastest quad-core i7. If you find, like me, that your CPU is sitting between 50% and 100% utilization a lot of the time, then consider Intel's latest 10-core i9 processor. The productivity gains from the increased performance would pay for this upgrade.

Storage

This one is a no-brainer: just get SSDs, as they are hundreds of times faster than an HDD. Why wait minutes for a reboot when it can happen in seconds, and have apps open 10x quicker as well?

I would recommend the following setup:

  • One for the boot drive – 512 GB
  • One for the source code – 512 GB or larger
  • One for virtual PCs – 1 TB or larger (if required)
  • One large HDD

The boot drive should always have its own SSD, with the source code and all development files on another SSD. This setup provides a clean separation between the OS and the source code: when the PC needs to be rebuilt, the OS drive can be safely formatted with no risk of losing any source code.

Also, since virtual PCs tend to take up a lot of space, I give them their own SSD. A common problem is that virtual PCs can use up all the disk space; when they are on their own SSD they won't cause the OS or development drive to run out of space.

The HDD in my system is used for backups. I use CrashPlan to keep all my changes backed up locally as well as remotely.

Memory

There are a number of factors to consider for this.

For web or mobile development, 32 GB is a great starting point. I normally use around 24 GB of RAM when doing web/desktop development, so 32 GB leaves a bit of space for other tools.

When using virtual PCs to simulate other servers or development environments, 64 GB is the better option. Then you have the headroom to simulate and test some quite complex environments.

For virtual PCs I also have another computer under my desk that is dedicated to them, since virtual PCs can be demanding on resources. It is my old development PC, but it is more than up to the task of running the virtual PCs, and it is also a great way of recycling old equipment.

What you do not want is for your computer to run out of memory and start paging to disk. Then your computer will run massively slower, as paging, even to SSDs, is far slower than using memory.

Monitors

Adding extra monitors boosts your productivity. There are a number of studies showing a second monitor can boost productivity by 35%!! That's an extra $30,000+ worth of work each year from the average developer, for a $500 monitor. That is an amazing return on investment.

I run three monitors, which I find is the perfect number for web development. One monitor has the IDE (e.g. Visual Studio), the next has the website I am working on, and the third has the other tools I need, like database tools, specifications, etc.

Tip: Always hide/close email and team chat windows (e.g. Slack) so you won't get distracted when working. Having them open in one of the monitors is just asking to be distracted.

Having upgraded to 27″ 4K monitors, I have found them brilliant to use; they lower eye strain and allow you to concentrate for longer.

Summary

For a new development PC I would recommend the following specs:

  • Intel i7 or i9
  • 2 to 3 SSDs, one HDD
  • 32 to 64 GB RAM
  • 2 to 3 27″ 4K IPS monitors

The extra money spent upgrading to the fastest computer components will quickly pay for itself through the increased developer productivity.

Intel SSD wear and tear stats after 7 years of usage

I had to rebuild one of my servers recently, so I decided to have a look at the wear level of its primary SSD, and I was very surprised at the usage and wear stats after 7 years of use. I would have thought it would be getting close to its end of life; it turns out it's almost the opposite.

I bought the Intel X25-M SSD soon after its initial release, after a lot of positive reviews. It cost me around $1000 and was worth every penny.

Initially it was the boot drive for my development PC, and I remember being blown away by just how fast this drive was. It was a game changer. As I have needed more storage space, I have swapped this drive out for more modern and larger SSDs.

Since this drive was an excellent performer, I then used it as the boot and SQL data drive for one of my servers. For the past 5 years it has run in a Continuous Integration/Deployment server running Windows Server, SQL Server, Jira, Bamboo and, more recently, Octopus.

Today as I am rebuilding the server I thought I would look at its wear stats, after 7 years of usage. I thought the SSD would be close to its end of life.

Well, it's still got 96% of its life left!!!!! If 7 years of usage consumed only 4% of its write endurance, that works out at roughly 7 / 0.04 ≈ 175 years of total life, so it has about another 168 years left in it!

It has written 16.16 TB of data during that time. That's over 100x its storage capacity.

When I first got SSDs for my computer, I was worried about how long they would last. The reality is that they will last for a very long time! On a side note, all the standard hard drives I bought back then have since failed.

The SSDs I have are far more stable and have kept my data safer than any of the hard drives I have owned. I always run hard drives in RAID 5 or mirrored mode to ensure data safety, but with SSDs I don't worry about needing to do this. Of course, I still ensure my SSDs are fully backed up off site via CrashPlan.

I have to take my hat off to Intel for producing such a great SSD!

img credit: https://commons.wikimedia.org/wiki/File:Intel_X25-M_Solid-State_Drive.jpg

Continuous Deployment – Long Running Batch Tasks

Continuous Deployment is an incredibly stable and versatile way of delivering software. The beauty of this approach is that any task, at any time, could be interrupted by an update, yet despite these interruptions the system maintains constant integrity and stability.

This is because software built for Continuous Deployment has been designed from the ground up to manage interruptions automatically. Building your software using Continuous Deployment methods will ensure it is robust: whether planned interruptions take place or more unexpected system failures occur (e.g. hard drive problems), the software maintains constant integrity and stability.

A key part of Continuous Deployment is the appropriate design of long running batch tasks. A long running batch task is a process that can take anything from minutes to hours to complete (compared to short running tasks that take seconds).

Processing Long Running Batch Tasks

Long running batch tasks have three main attributes:

  • Each task stops quickly on request. The quicker the better, so the system can finish the update; ideally this should be under 10 seconds. Longer running tasks will need to be terminated to allow the deployment to continue.
  • Each task's output is a single database transaction. This means that if the processing is stopped, the database is left in a consistent state.
  • If you are calling third party services, the results from those calls should be logged, so that if a task needs to restart it can check the status of the third party calls and avoid making them twice, e.g. you only want to charge the credit card once!!

For example, if you have to process 500,000 price updates, the first task would be to batch these updates into smaller chunks that can each be processed quickly, e.g. in under 10 seconds. I have often found the best batch size to be 1: with larger batch sizes, if the processing is stopped you need to work out exactly where you left off inside the chunk. A sketch of this is shown below.
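As a rough illustration, here is a minimal sketch of that splitting step. This is not code from a real system: the Batch, BatchItem and Product entities and the BatchDb context are all hypothetical, and the same made-up model is reused by the later sketches in this post.

using System.Collections.Generic;
using System.Data.Entity;

// Hypothetical model shared by the sketches in this post.
public class Batch
{
    public int Id { get; set; }
    public string SourceFile { get; set; }
}

public class BatchItem
{
    public int Id { get; set; }
    public int BatchId { get; set; }
    public int ProductId { get; set; }
    public decimal NewPrice { get; set; }
    public bool Processed { get; set; }
}

public class Product
{
    public int Id { get; set; }
    public decimal Price { get; set; }
}

public class BatchDb : DbContext
{
    public DbSet<Batch> Batches { get; set; }
    public DbSet<BatchItem> BatchItems { get; set; }
    public DbSet<Product> Products { get; set; }
}

public static class BatchSplitter
{
    // Turns the 500,000 price updates into 500,000 single-item rows.
    // With a batch size of 1, a restarted worker never has to work out where it
    // was inside a half-finished chunk; it just picks the next unprocessed row.
    public static void CreateItems(int batchId, IDictionary<int, decimal> newPriceByProductId)
    {
        using (var db = new BatchDb())
        {
            foreach (var update in newPriceByProductId)
            {
                db.BatchItems.Add(new BatchItem
                {
                    BatchId = batchId,
                    ProductId = update.Key,
                    NewPrice = update.Value,
                    Processed = false
                });
            }

            // One transaction: either every item row is created or none are.
            db.SaveChanges();
        }
    }
}

A real system would bulk insert a job of this size rather than adding half a million tracked entities one at a time, but the shape of the work is the point here.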

The basic processing loop

All the batch processing steps – from initial setup, to processing, to clean up – need to follow the basic processing loop below.

Since your code can be restarted at any time, when it restarts it needs to be able to do three things:

  • Work out where it left off
  • Do any clean up
  • Start the next processing step

The important part is the ability for the code to work out (1) where to restart the process and (2) whether any clean up needs to be done.

The clean up stage refers to removing any partial processing from the system, so that when the process resumes no double entries occur. This isn't so much of a problem if all the work is in a database and covered by a single database transaction, because if the processing is stopped all the changes are automatically rolled back.

Once you start using queues and files for storage, some thought needs to go into how your code pauses and resumes processing. This effort will result in your code being super robust.

Batch Processing Example

Let's look at an example. You have a customer who FTPs a file to you nightly for processing. The following is an overview of the process; the first two steps are Batch Creation and Batch Item Processing.

Batch Creation

Below is a simple process for initializing a batch, to prepare it for processing.

This processing loop can be paused at any time, and when it restarts the system will pick up where it left off.

Since the system only deletes the FTP file once all the batch items have been created in the database, we know everything is ready to be processed once the file has been deleted. The system can then move on to processing the batch items.
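Sketched with the same hypothetical model as above (the file path and the "productId,newPrice" CSV layout are also made up), the creation loop might look like this:

using System.IO;
using System.Linq;

public static class BatchCreator
{
    // Safe to run repeatedly: if a deployment interrupts it, the next run works
    // out where it left off and carries on.
    public static void PrepareBatch(string ftpFilePath)
    {
        using (var db = new BatchDb())
        {
            var fileName = Path.GetFileName(ftpFilePath);

            var batch = db.Batches.SingleOrDefault(b => b.SourceFile == fileName);
            if (batch == null)
            {
                batch = db.Batches.Add(new Batch { SourceFile = fileName });
                db.SaveChanges(); // the batch header now exists; a restart will find it
            }

            if (!db.BatchItems.Any(i => i.BatchId == batch.Id) && File.Exists(ftpFilePath))
            {
                // Create every item row in a single transaction.
                foreach (var line in File.ReadLines(ftpFilePath))
                {
                    var parts = line.Split(',');
                    db.BatchItems.Add(new BatchItem
                    {
                        BatchId = batch.Id,
                        ProductId = int.Parse(parts[0]),
                        NewPrice = decimal.Parse(parts[1]),
                        Processed = false
                    });
                }
                db.SaveChanges();
            }

            // Deleting the source file is the very last step, so a restart before
            // this point simply finds the batch already created and skips ahead.
            if (File.Exists(ftpFilePath) && db.BatchItems.Any(i => i.BatchId == batch.Id))
                File.Delete(ftpFilePath);
        }
    }
}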

Batch Item Processing

Below is a simple process for processing a single item in the batch.
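Continuing the hypothetical model from the earlier sketches, processing one item might look like this; each call is one small transaction, so the worker can be stopped for a deployment between any two calls:

using System.Linq;

public static class BatchItemProcessor
{
    // Processes the next unprocessed item; returns false when the batch is done.
    public static bool ProcessNextItem(int batchId)
    {
        using (var db = new BatchDb())
        {
            var item = db.BatchItems
                .FirstOrDefault(i => i.BatchId == batchId && !i.Processed);
            if (item == null)
                return false;

            // The actual work for this item: apply the new price.
            var product = db.Products.Single(p => p.Id == item.ProductId);
            product.Price = item.NewPrice;

            // If this step called a third party service (e.g. taking a payment),
            // you would also record that call in a log table and check the log on
            // restart, so the service is never called twice for the same item.

            item.Processed = true;
            db.SaveChanges(); // the price change and the processed flag commit together
            return true;
        }
    }
}

A worker simply calls ProcessNextItem in a loop until it returns false; because each iteration takes well under the 10-second window described above, a deployment never has to wait long.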

 

If you are using database transactions, you will probably never need to worry about clean up and rollback steps: if the processing is interrupted, the database takes care of that for you automatically.

If you follow these methods you will create robust software that can be used with Continuous Deployment.

Photo Credit: https://upload.wikimedia.org/wikipedia/commons/7/7d/FileStack.jpg

Thoughts on Continuous Deployment and Failure Management

I have been using Continuous Deployment for over 2 years now. It has completely changed how I write code and manage failures. The systems I design and build are now a lot more stable, thanks to Continuous Deployment.

There are two parts to Continuous Deployment: the updates themselves, and the design of the code to manage the temporary outages during those updates. A Continuous Deployment update is just an interruption to the code, a kind of failure. Designing the code to handle these failures forces you to think about all the other failures too, as they could potentially occur during an update.

Some people think you can't use Continuous Deployment because the updates will break the user experience, and that therefore you shouldn't use it for sites that need to remain continuously available, e.g. shopping sites. However, I believe Continuous Deployment forces you to design the code in a way that makes it more stable and available overall.

Continuous Deployment forces you to design code that can cope with a service being temporarily unavailable. There are a number of design patterns and architectural techniques that allow the code to cope with these unavailable services, and thus not be affected by Continuous Deployment updates.

Therefore, Continuous Deployment is not only about rapid updates to your sites and code but also about failure management. Using Continuous Deployment forces you to think about how your code will fail, since during an update services will be temporarily unavailable and your code needs to remain stable when these issues arise.

For example, what if the user clicks checkout just as your web server is updating? Uh oh!! Well, if you made the call via ajax, then on failing to reach the primary web server the code could send the transaction to a second server (perhaps just to be queued for processing once everything is back up) and return the user to the confirmation screen. Interestingly, if this is coded correctly, the user would not even be aware that the primary service was unavailable due to an update!
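The same failover idea, sketched in C# with HttpClient rather than browser ajax (the URLs and the JSON payload are hypothetical, not a real API):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class CheckoutClient
{
    private static readonly HttpClient Http =
        new HttpClient { Timeout = TimeSpan.FromSeconds(5) };

    // Try the primary checkout endpoint; if it is unreachable (e.g. mid-deployment),
    // hand the order to a secondary endpoint that queues it for later processing.
    public static async Task<bool> SubmitOrderAsync(string orderJson)
    {
        try
        {
            var primary = await Http.PostAsync(
                "https://shop.example.com/api/checkout",
                new StringContent(orderJson, Encoding.UTF8, "application/json"));
            if (primary.IsSuccessStatusCode)
                return true;
        }
        catch (HttpRequestException) { /* primary down or mid-update, fall through */ }
        catch (TaskCanceledException) { /* timed out, fall through */ }

        var secondary = await Http.PostAsync(
            "https://shop-failover.example.com/api/checkout",
            new StringContent(orderJson, Encoding.UTF8, "application/json"));
        return secondary.IsSuccessStatusCode;
    }
}

In a browser the equivalent is an ajax error handler that retries against the fallback URL; either way, the user just sees their confirmation screen.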

High quality code should be able to survive a temporary interruption to a service, allowing the user an uninterrupted experience no matter what updates are happening in the background. The ajax example is one of multiple techniques you could use to create code that can withstand interruption.

For code to be robust it needs to be able to handle all types of failure, since at the end of the day a failure will occur. Setting the code up to handle Continuous Deployment means all these potential failures will have been thought about already.

The only difference is how well your code manages the failure.

Some common failures:

  • Server Failure
  • Network Failure
  • Lost Request
  • Overloaded Errors
  • Software Updates
  • etc

Continuous Deployment is just another kind of failure, but using it forces you to think about all the other failures too, as they could potentially occur during an update. This not only makes your code more robust, it also increases your uptime for clients, because most failures are now managed: when a worst-case failure occurs, your code can already cope with almost all of them.

Netflix actually take this to the next level. They have a tool called Chaos Monkey which actively breaks parts of their system, taking down servers, turning off network cards and so on, just to make sure their code can handle failures. Their clients are unaware and the service is seamless.

Make the jump. Switch to Continuous Deployment and watch your code's quality and robustness increase. It's a win-win.

Stored Procedures and Entity Framework Compared

Which is better for modern web development: Stored Procedures or Entity Framework?

I will compare the two approaches using the following criteria.

  • Performance
  • Security
  • Business Logic
  • Code Discovery
  • Refactoring
  • Amount of Code
  • Scaling

Performance

SQL is a language that was written for processing sets of data, e.g. update all the records in a table to today's date. This is great if you have massive sets of data that need processing. The thing is, most LOB (Line of Business) apps only tend to update one row of a table at a time, so the full power of Stored Procedures is rarely used.

SQL Server can pre-compile Stored Procedures for performance. When Entity Framework (EF) sends a request as a parameterized query, its execution plan is also compiled and cached by SQL Server. The result is that Stored Procedures and EF queries have the same performance for queries.

It is also just as easy to write badly performing Stored Procedures as EF queries. In the past I have managed to speed up both SPs and EF queries by 100x to 1000x.

I have seen a lot of dynamic SQL in Stored Procedures, since Stored Procedures can be hard to write. The problem is that as soon as you use dynamic SQL in an SP it gets slower, since SQL Server now has to compile that query each time.

Both can perform well, but only if you understand SQL Server and SQL queries really well, as the same SQL mistakes can make either approach slow.

Security

SQL Server provides a comprehensive security system, but I have yet to see a web system that doesn't use a single login to access the SQL Server, which renders most of that security unused.

Instead, most applications move the security model into code, which makes SPs and EF equal in terms of user security.

One issue I have come across recently is the use of dynamic SQL in stored procedures, which introduces two problems: the first is performance, since SQL Server can no longer use a precompiled plan, and the second is that the code is now vulnerable to SQL injection attacks! The dynamic code was used because the programmer had difficulty getting SQL to do what they required and dynamic SQL was the easiest workaround. Unfortunately, it also introduced major security problems.
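As an illustration (a minimal sketch, not code from the project in question; the table and column names are made up, and an open connection is assumed), the difference between the vulnerable dynamic approach and a parameterized query looks like this:

using System.Data.SqlClient;

public static class CustomerSearch
{
    // Vulnerable: the search term is concatenated straight into the SQL, so input
    // like "'; DROP TABLE Customers; --" becomes part of the statement, and because
    // the text differs every time SQL Server cannot reuse a cached plan.
    public static SqlDataReader SearchUnsafe(SqlConnection openConnection, string name)
    {
        var cmd = new SqlCommand(
            "SELECT Id, Name FROM dbo.Customers WHERE Name LIKE '%" + name + "%'",
            openConnection);
        return cmd.ExecuteReader();
    }

    // Safe: the value travels as a parameter, never as SQL text, and the single
    // query shape lets SQL Server cache and reuse one execution plan.
    public static SqlDataReader SearchSafe(SqlConnection openConnection, string name)
    {
        var cmd = new SqlCommand(
            "SELECT Id, Name FROM dbo.Customers WHERE Name LIKE @pattern",
            openConnection);
        cmd.Parameters.AddWithValue("@pattern", "%" + name + "%");
        return cmd.ExecuteReader();
    }
}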

Business Logic

I am working on a number of applications at the moment, one based on EF and another on SPs.

With the EF app, the business logic resides in the C# code, which makes it trivial to find and navigate all the relevant pieces of code. Visual Studio and ReSharper both provide excellent tools for searching code.

The app using SPs has business logic scattered across the SPs and the C# code, making it next to impossible to work out what is happening, let alone make safe changes.

Using EF and C# is definitely the way to go.

Code Discovery

Say you wanted to find all the bits of code that update a particular field in your database: a simple task that you often have to do when working out how some code works.

Using C# and EF with Visual Studio and CodeLens, you can quickly see all the places the property is used.

With the code base using SPs, it's just not possible in any way that isn't massively time consuming. You have to do a global search for the field name, and the major problem is that the search may return hundreds of results that have nothing to do with what you are looking for.

EF and C# wins this hands down, it’s simply no contest.

Refactoring

I have hit this issue recently on a couple of different projects.

On the one that was primarily stored procedures it was a nightmare, since the SPs were in SQL Server and the access code was in Visual Studio projects. The only way to find every use of the field to rename was a global find and replace, which took ages, and even when I had finished I wasn't 100% sure I had got everything.

On the code base that was EF based, I used the "Rename" refactor and created a new migration. The job was done in 5 minutes.
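For reference, the EF6 migration for a rename is tiny; this is a minimal sketch with made-up table and column names, not the actual migration from that project:

using System.Data.Entity.Migrations;

public partial class RenameCustomerSurname : DbMigration
{
    public override void Up()
    {
        // Renames the column in place, so no data is copied or lost.
        RenameColumn(table: "dbo.Customers", name: "Surname", newName: "LastName");
    }

    public override void Down()
    {
        RenameColumn(table: "dbo.Customers", name: "LastName", newName: "Surname");
    }
}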

Once again EF and C# wins this hands down.

Amount of Code

This one is always important: the fewer lines of code you have, the lower the chance of a bug.

When using EF, it only takes a couple of lines of code to do a query: one to open the DB context and another to run the query.

With a Stored Procedure there is a stack of scaffolding code you need: open the connection, set up the SQL command, pass the parameters in, and then execute the command. If you have a DAL you will also need code to convert the result into objects, and then you still have to write the Stored Procedure itself. The sketch below shows the difference.
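A rough side-by-side (a minimal sketch; the Customer entity, the AppDbContext and the dbo.GetCustomerById procedure are all hypothetical):

using System.Data;
using System.Data.SqlClient;
using System.Linq;

public static class CustomerLookup
{
    // EF: the whole data access is a couple of lines.
    public static Customer GetWithEf(int customerId)
    {
        using (var db = new AppDbContext())
            return db.Customers.SingleOrDefault(c => c.Id == customerId);
    }

    // Stored procedure: connection, command, parameters and hand-written mapping,
    // and the dbo.GetCustomerById procedure itself still has to be written in SQL.
    public static Customer GetWithStoredProcedure(int customerId, string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetCustomerById", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Id", customerId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read())
                    return null;
                return new Customer
                {
                    Id = reader.GetInt32(reader.GetOrdinal("Id")),
                    Name = reader.GetString(reader.GetOrdinal("Name"))
                };
            }
        }
    }
}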

EF and C# wins this easily as well, less code is less bugs and the less code you have to write the more productive you will be.

Scaling

When it comes time to scale your app, the less work SQL Server has to do, the better your chances of scaling.

If all the work is done via Stored Procedures and the SQL Server CPU is close to 100% most of the time, you are going to have issues very quickly. The same goes if the SQL Server's disk IO is close to 100%.

With higher loads you want to shift more and more of the work to the web and application servers and away from the SQL Server box. After all, 10 or even 100 web servers are going to have more network, CPU and memory than a single SQL Server.

With EF, since all the business logic is already in code, you have the ability to add caching layers and other techniques to remove load from SQL Server.
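For example, a read-heavy query can be cached in the web tier. This is a minimal sketch: the Product entity, the AppDbContext and the 5-minute expiry are all made up for illustration.

using System;
using System.Linq;
using System.Runtime.Caching;

public static class ProductCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Product[] GetActiveProducts()
    {
        var cached = Cache.Get("active-products") as Product[];
        if (cached != null)
            return cached; // served from web server memory, no SQL Server round trip

        using (var db = new AppDbContext())
        {
            var products = db.Products.Where(p => p.IsActive).ToArray();
            Cache.Set("active-products", products,
                DateTimeOffset.UtcNow.AddMinutes(5)); // refreshed every 5 minutes
            return products;
        }
    }
}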

With Stored Procedures, since your business logic lives inside them, your opportunities to reduce the load are limited until you can move that logic into code.

I'm going to give this round to EF and C# as well.

Closing Thoughts

For any new project you should consider using an ORM like EF. The gains it will give you in productivity will outweigh any perceived shortcomings.

If EF doesn't quite meet your needs, there are a stack of other ORMs out there, like Hibernate; there is sure to be one that will work for you.

On a side note, most NoSQL databases have no concept of Stored Procedures, since they don't really make a difference when scaling an application and they introduce a lot of unnecessary complexity.

Choosing a Database for your next project

When you start a new project, there is a massive number of ways to store your data to choose from. Here are some examples:

  • Relational Databases
  • Document Databases
  • Key Value Database
  • File System
  • etc

From experience, 99% of the time the best data store for a new project is a relational database. Having said that, the key is understanding your data and how you will store and access it, and then choosing the appropriate storage system. Don't make the big mistake of using a technology just because it's the current fad, as this will most likely end up in a massive rewrite when the fad doesn't pan out. I've been wanting to use MongoDB for years, but have yet to find a project where it is a good match. Let the data choose the storage method, not what you want to use.

Every system will work and perform well with a small sample data set, so unless you have done lots of careful planning you are not going to find issues until you are live and in production, and by then it may be too late to fix them.

Here is an article on the issues caused by using the wrong data storage system.

Relational databases make an excellent starting point, and I will explain why.

Relational Databases scale really easily.

Relational databases scale very, very well. For example, stackoverflow.com, which is in the world's top 50 web sites, is powered by a relational database (Microsoft SQL Server). In fact it works so well that they have close to 0% CPU usage on their database servers!! That's insane!

One common technique with relational databases is to add read replicas, so all queries (reports) go to a secondary server (you can add more if required). This reduces the load on the primary server. As reports tend to be the heaviest use of a database, adding read replicas allows you to scale those operations. A sketch of the idea is shown below.
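A minimal sketch of the idea with EF (the "Primary" and "ReadReplica" connection string names and the Order entity are hypothetical):

using System.Data.Entity;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public AppDbContext() : base("Primary") { }              // normal read/write work
    protected AppDbContext(string connectionName) : base(connectionName) { }

    public DbSet<Order> Orders { get; set; }
}

// Reports and other heavy queries use this context, so they hit the replica and
// leave the primary server free for transactional work.
public class ReportingDbContext : AppDbContext
{
    public ReportingDbContext() : base("ReadReplica") { }
}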

With modern hardware (SSDs, terabytes of memory, tens of CPU cores), not many systems are going to hit the limits, which lets relational databases scale amazingly well. It's a lot easier to scale vertically than horizontally, and a lot easier to maintain one server (vertical scaling) than tens of servers (horizontal scaling).

Relational databases are easy to program against.

The tools for relational databases are very easy to work with. Tools like Entity Framework for .Net (or Hibernate for Java) are so simple to use you almost forget you are using a relational database. The ecosystem is incredibly rich: monitoring servers, reporting engines, code generators and more, so there is no shortage of tools. And if you hit an issue, someone else is bound to have hit it too, so sites like stackoverflow.com have tons of helpful tips.

Relational databases keep your data safe

An important feature of relational databases is referential integrity. It helps ensure that any data you add or delete is valid and doesn't leave broken records behind. Each field can also have extra checks to help ensure the entered data is correct.

It can be a pain at times, but the extra safety these checks give you is well worth the small investment of using them. Nothing in life is free, but for the low effort of enabling referential integrity and field constraints the gains are fantastic, and they have saved me more than once. Relational databases have other features to keep your data safe too, such as transactions and check constraints.

Relational databases are fast (very fast)

They have had over 30 years of design and optimization, which has made them very, very fast at querying data. Some of Microsoft's upgrades to SQL Server have been stunning: I have seen 2x to 10x performance improvements just from upgrading to the latest version!!

One database I work on takes hundreds of millions of updates every year and hits peaks of 30 to 100 adds/updates per second, and it isn't even breaking a sweat. The database server is a Microsoft Azure P1 instance, so the actual hardware it uses is very modest (my laptop is faster). You can buy hardware that would be 100x more powerful, so there is a lot of room for growth!

Next Steps

For your next project, I would recommend first modelling out your data structure on a whiteboard. Think about what reports and what indexes you will require. I am sure you will find that for the majority of applications a relational database is the best starting point, and you can always supplement it with other data storage engines later if required. I use Azure Blobs (files) and Tables (key/value) to store data that doesn't need to be in the database.

Image Credit: https://commons.wikimedia.org/wiki/File:Balanced_scale_of_Justice.svg

Firewalls: ClearOS Review

Recently I have been hunting for a firewall to use in my internal network. Primarily I needed to create a DMZ for some web servers so that they could not access my internal LAN. It also needed to work with Hyper-V.

After looking at (and trying) a number of options I came across ClearOS, and I was pleasantly surprised. It ticked all of the major boxes for me, was simple to set up, and has a fantastic UI.

So many firewall products have UIs that are very difficult to use and can take ages to set up even the simplest firewall rule, simply because finding the correct option takes so long.

In comparison, ClearOS is a breeze to use. The default set-up comes with a very clean UI with only a few options, so it takes just minutes to get up and running. A lot of vendors could learn from ClearOS and how they have implemented their UI. The default set-up quickly provides a firewalled LAN with no fuss. Perfect!

When you need extra functionality, like 1-to-1 NAT, you simply open the ClearOS Marketplace and add the feature you want. The new functionality loads quickly without a reboot, and there are over 100 modules to choose from, so there is no shortage of functionality!!

[Screenshot: ClearOS Marketplace]

When configuring your network you have the usual options of External (for your internet connection), LAN (for your internal PCs) and DMZ (for any web servers), but they also add the option of a "Hot LAN", which is a mixture of a LAN and a DMZ. You can place your servers in the Hot LAN and know that your LAN will still be safe. The Hot LAN is very useful when you don't have direct access to your public IPs (mine are hidden behind another router) or need to use NAT.

[Screenshot: ClearOS 1-to-1 NAT firewall configuration]

So not only is ClearOS simple to get started with, it can easily be expanded to handle much more complex environments. It's the best of both worlds!

If you are looking for a new firewall you should put ClearOS at the top of your list. They also provide hardware with ClearOS pre-installed, and support (both free and paid).

Comparing Development Environments

One of the challenges with development is choosing a computer language that enables coding for multiple platforms. The more platforms you can code for with a single language, the more productive you can be, because you can share code between the targets. Even for polyglot developers, rewriting code for different platforms is expensive and time consuming, as there will be multiple code bases to maintain.

So, with regard to the major companies, who enables the best multi-platform development?

Here is a table summarizing which platforms each company supports.

Platform     Apple   Google   Oracle   Microsoft
iOS          Yes     –        –        Yes
OSX          Yes     –        Yes      Yes
Windows      –       –        Yes      Yes
Android      –       Yes      –        Yes
Linux        –       –        Yes      Yes
Web Server   –       Yes      Yes      Yes
Web Client   –       Yes      Yes      Yes

The bit that surprised me is that Microsoft ticks all the boxes!!! In the last few years Microsoft has really changed direction, and now seems determined to become the best all-around development platform with the best tools!

Not only do they now support the majority of development environments, more and more of their products are now open source, and the majority of their development tools have free versions!

Here are some links to Microsoft tools to get you started:

  • iOS, OSX, Android: C# Mobile Tools, HTML (Cordova)
  • Windows: Windows Tools
  • Linux: .Net Core, C++ Support, Bash Shell
  • Web Server: Asp.Net Core

Microsoft's 23 Year Old Data Loss Issue

Whenever you create a file in Windows with a long path (more than 255 characters), you risk losing that file and its data forever, since Windows cannot safely copy, move or even delete these files!

First, a bit of history. Over 30 years ago, when Microsoft created the second version of the FAT file system, it could only handle file paths up to 255 characters long. Today we are still stuck with that limitation.

Over time Microsoft has made massive changes to its file systems. The introduction of NTFS in 1993 removed these limitations. Well, sort of.

The problem is that Microsoft hasn't updated its OS or tools to correctly handle these long paths in the 23 years since.

So 23 YEARS later we still run the risk of losing our data if any file path exceeds 255 characters, because the underlying Microsoft tools can't perform simple operations, such as copying or deleting files, on long paths!

This affects Microsoft tools such as File Explorer, OneDrive and PowerShell.

I call it a bug, since the problem has been known for 23+ years, but to this day Microsoft has made no effort to apply the simple fixes and keep your data safe.

The crazy bit is that Microsoft has had the ability to work around these limitations built into the OS for 23+ years. They have simply chosen not to update their tools to use it.

Even their more modern products like OneDrive and PowerShell are still affected by this bug. I can't understand why they still choose to write their modern tools this way, with the same limitations, when everything needed is built into the OS. It's just pure madness.

To avoid this problem keep your file paths under 255 characters.

Fingers crossed Microsoft will fix these issues and take steps to keep our data safe.

Update 6/8/2016

Microsoft has updated the .Net API to have long path support by default in .Net 4.6.2!!
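As a minimal sketch (assuming .Net 4.6.2 or later, where the old MAX_PATH checks in System.IO are relaxed; the folder path is made up), the extended-length \\?\ prefix lets you clean up a folder that File Explorer refuses to touch:

using System.IO;

class LongPathCleanup
{
    static void Main()
    {
        // The \\?\ prefix tells the Windows API to skip the legacy path length limit,
        // and on .Net 4.6.2+ System.IO passes such paths through instead of throwing
        // PathTooLongException.
        const string longFolder = @"\\?\D:\Archive\SomeVeryDeeplyNestedFolder";

        if (Directory.Exists(longFolder))
        {
            // Recursively delete the folder and everything inside it.
            Directory.Delete(longFolder, recursive: true);
        }
    }
}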

Windows 10 Anniversary Update

File Explorer still can't delete folders with long paths, but since you can now install a Linux Bash shell on Windows 10, you can use Linux to delete, copy and move folders with long paths in Windows 10. That's crazy!!!

Home Office Setup

When working from home, your office set-up makes the difference between success and failure.

These are the key factors in your office:

  • Distraction Free
  • Working Hours
  • Internet Connection
  • Computer Hardware
  • Chair
  • Desk
  • Phone

Distraction Free

This can be a deal breaker. You need an area in your house where you can work and not be disturbed.

This is even more important if you have children at home, since they will want to spend time with you, which can make working very difficult! You need to set a rule that when you are working they are not to disturb you, but make sure they know when you are finished so they can play with you.

Ideally you have a room that is dedicated to your office, with a door that can be shut, blocking out any distractions so you can get your work done in the shortest time possible. Then you have time for doing life: seeing friends, playing with your kids, etc.

If you don't have a space where you can work distraction free, consider renting a desk at a co-working facility. A lot of these places have excellent facilities, are tax deductible, and provide a really good divide between work and play.

Working Hours

Set yourself a time to start work each day; before you know it, it will become a habit, and that's half the battle won. You are now at your desk and working.

It's also too easy to start working 24/7 when working from home, and that is the quickest way to burn out!! Studies show that when you exercise and take time away from your work you are more productive, so book in time away from your work.

Studies also show that your overall productivity drops for every hour you work over 40 hours, since your brain gets fatigued. Your 80-hour marathon weeks may not be producing more than a 40-hour week would. So work smart and use the other 40 hours for having fun. Work-life balance is probably the reason you work from home, so don't forget about it.

Internet Connection

A business internet connection is your best option for home working. Sure, they cost more, but the benefits outweigh the cost.

The most important factor for your home office internet connection is how reliable it is. Check your ISP's SLA for the time to repair a faulty connection.

Cheap home connections may take 3-4 days to get repaired!!

Business connections can be repaired in 4-24 hours. That is a lot better!

Now imagine not earning for 4 days because your cheap internet connection is down, or being unable to finish that vital project; that cheap connection is suddenly very expensive.

Most business connections also come with the option of static IPs, which allow you to run servers at your office. Most home connections expressly forbid running servers on the connection.

Also, most ISPs give business connection traffic priority over consumer traffic. You can never have too much speed. Oh, for a 1 Gbps connection!

Computer Hardware

There are different priorities when buying for a home office.

The most important is how quickly you can get your computer fixed when it fails. I chose an HP workstation since you can purchase a 4-hour on-site warranty for it!! So if anything fails, 4 hours later you are back up and running.

Compare that to a Mac Pro: if it fails you have to post it in and wait a week or so for it to be fixed, or try to book a Genius appointment at your local Apple store and pray they have replacement parts on hand.

When you work for someone else it's their problem if your computer breaks, and you still get paid.

When you work for yourself, if you can't work you don't earn!! Always have a plan for getting back to work quickly. That could be on-site support or a spare computer.

Chair

This is usually the bit of equipment that gets the least amount of thought, but there are a lot of benefits to a great chair. Most people sit in their chair for over 180 hours a month; imagine the potential negative impact on your body from a bad chair.

Some things you want to look for in a chair:

  • Supports your body
  • Moves with your body
  • Removes pressure points
  • Head rest
  • Arm rests
  • Height adjustment
  • Lumbar support

A great chair has been one of the best things I have ever bought!! It makes a day at the keyboard effortless.

Try a specialist company like http://www.totalbackcare.co.uk/ or http://www.back2.co.uk/ to find a chair. They will have far better advice than your run-of-the-mill office chair store.

Phone

These days there are so many options for a phone. The main requirement is to allow your clients to contact you, so an answerphone is essential.

Some options for a phone:

  • Land Line
  • VOIP
  • Mobile
  • Skype (with number)

VOIP is one of the best options as it can grow with your company. You can use your mobile phone to take VOIP calls or redirect calls to your mobile. You can even answer your calls from anywhere in the world!

<photo credit Home Office>